7 Algorithms: By Kurt Angelo Ballarta, BSCS-2B
Algorithms
Comb Sort
Backjumping
Linear Multistep Methods
Brent's Algorithm
Proximity Algorithm - Largest Empty Sphere Algorithm
Graph Reduction
String Hashing
Comb Sort
Introduction
Comb sort is an improvement over the classic Bubble Sort algorithm. The primary idea behind
Comb Sort is to eliminate the "turtles," or small values near the end of the list, which slow down
the sorting process. It does this by using a larger gap between compared elements and then
progressively reducing the gap size until it becomes one, at which point Comb Sort effectively
becomes Bubble Sort.
Implementation in Python
def combSort(arr):
    gap = len(arr)
    sorted = False
    while not sorted:
        # shrink the gap by a factor of 1.3, never below 1
        gap = max(int(gap / 1.3), 1)
        if gap == 1:
            sorted = True          # final pass behaves like Bubble Sort
        for i in range(len(arr) - gap):
            if arr[i] > arr[i + gap]:
                arr[i], arr[i + gap] = arr[i + gap], arr[i]
                sorted = False     # a swap means another pass is needed

# Example usage
arr = [34, 8, 64, 51, 32, 21]
combSort(arr)
print(arr)  # [8, 21, 32, 34, 51, 64]
A Worked Example in Comb Sort
Let's walk through a simplified worked example of Comb Sort with an array:
Initial array: [64, 34, 25, 12, 22, 11, 90]
1. Initial gap calculation:
Gap = 7 (length of array) // 1.3 ≈ 5
2. First pass with gap 5:
Compare 64 and 11, swap (64 > 11)
Array after pass: [11, 34, 25, 12, 22, 64, 90]
3. Gap reduction:
Gap = 5 // 1.3 ≈ 3
4. Second pass with gap 3:
Compare 11 and 12 (no swap), 34 and 22 (swap), 25 and 64 (no swap), 12 and 90 (no swap)
Array after pass: [11, 22, 25, 12, 34, 64, 90]
5. Further passes:
The gap keeps shrinking (3 // 1.3 ≈ 2, then 1), and passes continue until a full pass with gap 1 makes no swaps, leaving [11, 12, 22, 25, 34, 64, 90].
Time Complexity
Best Case: O(n log n), when the list is already nearly sorted.
Average Case: Ω(n²/2^p), where p is the number of increments (gap values) used.
Worst Case: O(n²), when the list is in reverse order.
Auxiliary Space
Comb Sort uses O(1) auxiliary space as it is an in-place sorting algorithm.
Conclusion
Comb Sort offers an enhancement over Bubble Sort by effectively reducing the number of
unnecessary swaps and comparisons through the use of a shrinking gap. Although it may not be
the best choice for very large datasets, it provides a simple and efficient alternative for smaller
lists, with a straightforward implementation that can be easily understood and utilized.
References:
https://www.geeksforgeeks.org/comb-sort/
https://en.wikipedia.org/wiki/Comb_sort
Backjumping Algorithm
Introduction
Backjumping is a sophisticated technique used to optimize the backtracking process in constraint
satisfaction problems (CSPs). It improves the efficiency of backtracking algorithms by allowing the
algorithm to jump back more than one level when a dead end is encountered, skipping over certain levels
of the search tree that cannot possibly resolve the conflict. This technique helps reduce the number of
constraints checked and can significantly decrease the search space, making the solving process faster.
Origin of Backjumping
Backjumping was introduced as an enhancement to the traditional backtracking algorithms used in
solving CSPs. In CSPs, the goal is to assign values to variables under certain constraints such that no
constraints are violated. Backtracking is a common method for solving CSPs, where values are assigned
to variables sequentially, and upon encountering a conflict, the algorithm backtracks to the previous
variable to try a different value. Backjumping improves upon this by skipping over some variables,
directly jumping back to a variable that might resolve the conflict.
Advantages of Backjumping
Efficiency: Reduces the number of variables and constraints checked, leading to faster problem-solving.
Reduced Search Space: By skipping over irrelevant parts of the search tree, the algorithm
focuses on more promising areas.
Improved Performance: Especially beneficial in large and complex CSPs where traditional
backtracking would be too slow.
Disadvantages of Backjumping
Complexity: Implementing backjumping can be more complex compared to simple backtracking.
Overhead: Additional computation is needed to determine where to jump back, which might not
always be beneficial for small or simple problems.
Description of Backjumping
Backjumping works by maintaining information about the constraints that have caused conflicts. When a
conflict is encountered, instead of simply backtracking to the most recent variable, the algorithm analyzes
the cause of the conflict to determine the most appropriate variable to backtrack to. This allows it to skip
over certain levels of the search tree that are irrelevant to the current conflict.
Pseudocode of Backjumping
1. Check Safety:
o Define a function is_safe(board, row, col) to check if placing a queen at
position (row, col) on the board is safe.
o Check if there are any queens in the same column as the current position.
o Check if there are any queens in the upper left diagonal.
o Check if there are any queens in the upper right diagonal.
o If no queen threatens the current position, return True; otherwise, return False.
2. Place Queens:
o Define a function solve_n_queens(board, row) to recursively place queens on
the board.
o Base case: If all queens are placed (i.e., row equals the size of the board), return
True.
o Iterate through each column in the current row.
o If placing a queen at the current position is safe, mark that position with a queen
and recursively call solve_n_queens for the next row.
o If a solution is found in the recursive call, return True.
o If no solution is found, backtrack by removing the queen from the current position
and try the next column.
o If no queen can be placed in the current row, return False.
3. Solve N-Queens Problem:
o Define a function n_queens(n) to solve the N-Queens problem for a given board
size n.
o Create a 2D array (board) of size n x n and initialize it with zeros.
o Call solve_n_queens with the initial board and row index 0.
o If a solution is found, print "Solution exists! Here is one possible configuration:"
followed by the board configuration.
o If no solution is found, print "Solution does not exist for this board size."
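The steps above describe a standard backtracking solver for the N-Queens problem; the following is a minimal Python sketch of those steps, written here as an illustration.

def is_safe(board, row, col):
    n = len(board)
    for i in range(row):                                     # same column above
        if board[i][col] == 1:
            return False
    for i, j in zip(range(row - 1, -1, -1), range(col - 1, -1, -1)):
        if board[i][j] == 1:                                 # upper-left diagonal
            return False
    for i, j in zip(range(row - 1, -1, -1), range(col + 1, n)):
        if board[i][j] == 1:                                 # upper-right diagonal
            return False
    return True

def solve_n_queens(board, row):
    n = len(board)
    if row == n:                                             # all queens placed
        return True
    for col in range(n):
        if is_safe(board, row, col):
            board[row][col] = 1
            if solve_n_queens(board, row + 1):
                return True
            board[row][col] = 0                              # backtrack
    return False

def n_queens(n):
    board = [[0] * n for _ in range(n)]
    if solve_n_queens(board, 0):
        print("Solution exists! Here is one possible configuration:")
        for row in board:
            print(row)
    else:
        print("Solution does not exist for this board size.")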
Example usage:
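Assuming the sketch above:

n_queens(4)
# Solution exists! Here is one possible configuration:
# [0, 1, 0, 0]
# [0, 0, 0, 1]
# [1, 0, 0, 0]
# [0, 0, 1, 0]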
Step-by-Step Execution
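With n = 4 and the left-to-right column order used above, the solver first places a queen in row 0, column 0, explores the placements below it, finds that none can be completed, and backtracks all the way to row 0. Restarting with row 0, column 1, it then places queens at (0, 1), (1, 3), (2, 0), and (3, 2), which is the configuration shown above. A backjumping variant would shorten this search by jumping straight back to the row whose queen actually caused a dead end, rather than retreating one row at a time.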
Time Complexity
The time complexity of the backjumping algorithm depends on the size of the search tree and on how much of it the jumps allow the algorithm to skip; in the worst case (as in the N-Queens search described above) it remains O(N!).
Conclusion
Backjumping enhances the traditional backtracking approach by skipping irrelevant parts of the
search tree, which leads to improved efficiency in solving CSPs. Although it introduces
additional complexity in implementation, the performance benefits in large and complex
problems make it a valuable technique in the field of constraint satisfaction.
References
Wikipedia, "Backjumping," https://en.wikipedia.org/wiki/Backjumping.
Linear Multistep Methods
Introduction
Linear multistep methods (LMM) are a family of numerical methods used to solve ordinary
differential equations (ODEs). These methods improve the accuracy of the numerical solution by
using information from multiple previous points to calculate the next point. They can be
categorized into explicit and implicit methods, making them particularly useful for both stiff and
non-stiff differential equations. The use of multiple steps allows for higher-order accuracy
compared to single-step methods like Euler's method.
The general form of a k-step linear multistep method is
α_0·y_n + α_1·y_{n−1} + … + α_k·y_{n−k} = h·(β_0·f_n + β_1·f_{n−1} + … + β_k·f_{n−k}),
where α_j and β_j are coefficients, y_{n−j} are the previous points, f_{n−j} are the function values at those points, and h is the step size.
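As a concrete illustration, the sketch below applies one well-known member of this family, the two-step Adams-Bashforth method (coefficients 3/2 and -1/2), to the test problem y' = y, y(0) = 1, whose exact solution is e^t; the function and variable names are chosen here for illustration only.

import math

# Two-step Adams-Bashforth: y_{n+2} = y_{n+1} + h * (3/2 * f_{n+1} - 1/2 * f_n)
def adams_bashforth_2(f, t0, y0, h, steps):
    ts = [t0, t0 + h]
    ys = [y0, y0 + h * f(t0, y0)]   # bootstrap the second starting value with one Euler step
    for n in range(steps - 1):
        f_n = f(ts[n], ys[n])
        f_n1 = f(ts[n + 1], ys[n + 1])
        ys.append(ys[n + 1] + h * (1.5 * f_n1 - 0.5 * f_n))
        ts.append(ts[n + 1] + h)
    return ts, ys

# y' = y with y(0) = 1, integrated to t = 1; the exact answer is e ≈ 2.71828
ts, ys = adams_bashforth_2(lambda t, y: y, 0.0, 1.0, 0.01, 100)
print(ys[-1], math.e)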
Time Complexity
The time complexity of linear multistep methods depends on the number of steps n and the order
of the method k. Typically, the time complexity is O(nk), because for each of the n steps the
method performs a fixed number of operations proportional to k. Higher-order methods might
require more computational effort per step but generally offer better accuracy.
Conclusion
Linear multistep methods enhance the numerical solving of ordinary differential equations by
utilizing multiple previous points for higher accuracy and efficiency. Despite their complexity
and need for initialization, they are powerful tools for solving stiff and non-stiff ODEs, providing
significant improvements over single-step methods.
References
Wikipedia, "Linear multistep method," https://en.wikipedia.org/wiki/Linear_multistep_method.
LibreTexts, "Linear Multistep Methods," https://math.libretexts.org/Bookshelves/Differential_Equations/Numerically_Solving_Ordinary_Differential_Equations_(Brorson)/01%3A_Chapters/1.06%3A_Linear_multistep_methods.
Scholarpedia, "Linear Multistep Method," http://www.scholarpedia.org/article/Linear_multistep_method.
Brent's Algorithm
Introduction
Brent's Algorithm is a root-finding algorithm that combines elements of root bracketing,
bisection, and inverse quadratic interpolation to find the roots of a real-valued function. This
method is renowned for its robustness and efficiency, often outperforming simpler methods like
bisection and the secant method. Developed by Richard Brent in 1973, this algorithm guarantees
convergence under certain conditions and is particularly effective for functions where derivatives
may not be readily available.
Root-finding algorithms are essential in numerical analysis for solving equations of the form
f(x) = 0. Brent's method, also known as the van Wijngaarden-Dekker-Brent method, combines
several techniques to enhance both speed and reliability. This paper outlines the principles
behind Brent's method, providing a comprehensive explanation, pseudocode, and a detailed
example.
Brent's Algorithm
Brent's method strategically combines three primary techniques:
1. Root Bracketing: Ensures that the root lies within a specified interval.
2. Bisection Method: Narrows down the interval by repeatedly bisecting it and selecting
subintervals where the root lies.
3. Inverse Quadratic Interpolation: Uses a quadratic polynomial to approximate the root,
which can speed up convergence.
Algorithm Description
The algorithm starts with three points x1, x2, and x3 that bracket the root. It then performs
inverse quadratic interpolation to predict the next point. If the predicted point does not
sufficiently reduce the interval or lies outside the interval, the algorithm falls back to
bisection.
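To make the interpolation step concrete, the following Python sketch performs a single inverse quadratic interpolation step (the helper name and the sample points are chosen here for illustration): it fits x as a quadratic function of y through the three bracketing points and evaluates that quadratic at y = 0.

import math

def inverse_quadratic_step(f, a, b, c):
    # Lagrange interpolation of x as a function of y, evaluated at y = 0
    fa, fb, fc = f(a), f(b), f(c)
    return (a * fb * fc / ((fa - fb) * (fa - fc))
            + b * fa * fc / ((fb - fa) * (fb - fc))
            + c * fa * fb / ((fc - fa) * (fc - fb)))

# For f(x) = x^2 - 2 and points 1, 1.5, 2, one step already gives about 1.4095,
# close to sqrt(2) ≈ 1.4142
f = lambda x: x * x - 2
print(inverse_quadratic_step(f, 1.0, 1.5, 2.0), math.sqrt(2))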
Steps:
1. Start with an interval [a, b] such that f(a) and f(b) have opposite signs, so the root is bracketed.
2. At each iteration, compute a candidate point by inverse quadratic interpolation (or by the secant method when only two distinct function values are available).
3. Accept the candidate only if it lies within the current bracket and shrinks it sufficiently; otherwise, take a bisection step instead.
4. Update the bracket so that it still contains the root, and repeat until the interval width or |f(b)| falls below the desired tolerance.
Worked Example
Consider f(x) = x^2 − 2, to find the square root of 2:
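Starting from the bracket [1, 2] (f(1) = −1 < 0 and f(2) = 2 > 0), the method converges rapidly to x ≈ 1.41421356. As a quick check, SciPy's brentq function, which implements Brent's method, reproduces this result (the library call is shown only as an illustration and is not part of the original example):

from scipy.optimize import brentq

# f(x) = x^2 - 2 has its positive root at sqrt(2), bracketed by [1, 2]
root = brentq(lambda x: x * x - 2, 1.0, 2.0)
print(root)  # ~1.4142135623730951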
Conclusion
Brent's method is a powerful root-finding algorithm that guarantees convergence and combines
the reliability of bisection with the speed of inverse quadratic interpolation. Its hybrid approach
makes it suitable for a wide range of applications in numerical analysis.
References
1. Wolfram MathWorld, Brent's Method.
2. Wikipedia, Brent's method.
3. GeeksforGeeks, Brent's Cycle Detection Algorithm.
Proximity Algorithm - Largest Empty Sphere Algorithm
Introduction
The largest empty sphere problem involves finding the largest possible hypersphere that can fit
within a given space without overlapping any specified obstacles. This problem is a key concept
in computational geometry, with applications in fields such as clustering, collision detection, and
spatial analysis.
In computational geometry, the largest empty sphere problem focuses on determining the largest
hypersphere that fits in a d-dimensional space without intersecting any given obstacles. This
problem can be applied to several real-world scenarios, including optimizing layouts and
maximizing space utilization.
Problem Definition
Given a set of obstacles in d-dimensional space, the goal is to find the largest hypersphere such
that its interior does not intersect any obstacle. The problem can be simplified to two dimensions,
where it involves finding the largest empty circle.
Algorithm Description
1. Initial Setup: Identify all obstacles and their coordinates in the given space.
2. Convex Hull Calculation: Compute the convex hull of the obstacles to define the
boundary within which the largest empty sphere will be located.
3. Voronoi Diagram Construction: Create a Voronoi diagram for the obstacle points. The
vertices of the Voronoi diagram are potential candidates for the sphere's center.
4. Sphere Calculation: For each candidate point, calculate the largest sphere that can be
centered at that point without intersecting any obstacles.
5. Optimization: Select the candidate point that allows for the largest sphere.
Pseudocode
function largestEmptySphere(obstacles, dimension):
    convexHull = computeConvexHull(obstacles)
    voronoiDiagram = constructVoronoiDiagram(obstacles)
    largestSphere = null
    maxRadius = 0
    for vertex in voronoiDiagram.vertices:
        if insideConvexHull(vertex, convexHull):
            radius = distanceToNearestObstacle(vertex, obstacles)
            if radius > maxRadius:
                maxRadius = radius
                largestSphere = sphere(vertex, radius)
    return largestSphere
Worked Example
Consider a 2D example with obstacles at coordinates (1, 1), (2, 2), and (3, 1):
1. Compute the convex hull, which forms a triangle encompassing all points.
2. Construct the Voronoi diagram.
3. Identify Voronoi vertices as potential centers for the largest empty circle.
4. Calculate the largest circle for each vertex.
5. Select the vertex yielding the largest circle.
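For these three obstacle points the Voronoi diagram has a single vertex, their circumcenter, so steps 4 and 5 reduce to evaluating one candidate: the center (2, 1) with clearance radius 1. A small Python sketch of that evaluation (the circumcenter helper is written here for illustration; only the coordinates come from the example):

import numpy as np

def circumcenter(a, b, c):
    # Center of the circle through three points: the triangle's only Voronoi vertex
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return np.array([ux, uy])

obstacles = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 1.0]])
center = circumcenter(*obstacles)
radius = np.linalg.norm(obstacles - center, axis=1).min()
print(center, radius)  # [2. 1.] 1.0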
Conclusion
The largest empty sphere algorithm is essential in computational geometry for optimizing space
utilization and solving proximity problems. By leveraging techniques such as convex hull
computation and Voronoi diagrams, this method provides an efficient solution to finding the
largest sphere that can fit within a given set of obstacles.
References
1. Wikipedia, Proximity problems.
2. Wikipedia, Largest empty sphere.
Graph Reduction
Introduction
Graph reduction is a technique in computer science used to implement non-strict (lazy)
evaluation. This method efficiently evaluates expressions by transforming them into graphs and
reducing these graphs. It is essential in functional programming languages, enabling more
effective computation and resource utilization.
Key Steps:
1. Program Conversion: Convert the program into combinator form.
2. Graph Mapping: Map the combinator representation to a graph data structure.
3. Graph Reduction: Perform reduction on the graph to compute the result.
Pseudocode
function graphReduction(expression):
graph = convertToGraph(expression)
while not isReduced(graph):
for node in graph:
if isReducible(node):
reduce(node)
return graph.getResult()
Worked Example
1. Tree Representation:
o Initial tree: ((2 + 2) + (3 + 3))
o Reduced tree: (4 + 6)
o Result: 10
2. Graph Representation:
o Initial graph: Nodes for each operation and edges for dependencies.
o Reduced graph: Shared subexpressions evaluated once.
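A rough Python sketch of the same idea (the Node class is illustrative, not a real graph-reduction engine): each node caches its result once reduced, so a shared subexpression node is evaluated only once.

class Node:
    def __init__(self, op=None, left=None, right=None, value=None):
        self.op, self.left, self.right, self.value = op, left, right, value

    def reduce(self):
        if self.value is None:                 # leaves already hold a value
            left = self.left.reduce()
            right = self.right.reduce()
            self.value = left + right          # only '+' is modeled here
        return self.value

two, three = Node(value=2), Node(value=3)
expr = Node('+', Node('+', two, two), Node('+', three, three))
print(expr.reduce())  # 10

shared = Node('+', two, three)                 # one node used in two places
print(Node('+', shared, shared).reduce())      # 10; 'shared' is reduced only once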
Conclusion
Graph reduction is a powerful technique for implementing lazy evaluation in functional
programming. By representing expressions as graphs and applying reductions, it optimizes the
computation process, ensuring efficiency and avoiding redundant calculations.
References
1. Wikipedia, Graph reduction.
String Hashing
Abstract
String hashing is a technique used to convert a string into a numerical value, or hash, that
uniquely represents the content of the string. This technique is fundamental in various
applications, such as string comparison, data retrieval, and cryptography. Efficient string hashing
ensures quick lookups and comparisons, making it an essential tool in competitive programming
and computer science.
Introduction
String hashing involves converting a string into a fixed-size integer, which uniquely identifies
the string's content. The primary goals of string hashing are to achieve fast computation and to
minimize collisions (different strings having the same hash value). This paper explains the
principles of string hashing, provides a pseudocode implementation, and discusses its time and
space complexities.
Mathematical Representation:
Given a string S of length n with characters S[0], S[1], ..., S[n−1], the hash value H is computed as:
H = (S[0]·p^0 + S[1]·p^1 + S[2]·p^2 + ... + S[n−1]·p^(n−1)) mod m,
where p is a prime number (the base) and m is a large prime modulus used to avoid overflow and reduce collisions.
Algorithm Description
String hashing can be efficiently implemented using a rolling hash approach, which allows the
hash value of a substring to be computed quickly based on the hash value of the previous
substring.
Steps:
1. Initialization: Select appropriate values for the base ppp and modulus mmm.
2. Hash Computation: Compute the hash value of the string using the polynomial rolling
hash formula.
3. Rolling Hash: Update the hash value for substrings efficiently using the previous hash
value.
Pseudocode
function computeHash(S, p, m):
    n = length(S)
    hash_value = 0
    p_pow = 1  # p^0 initially
    for i from 0 to n-1:
        hash_value = (hash_value + (ord(S[i]) - ord('a') + 1) * p_pow) % m
        p_pow = (p_pow * p) % m
    return hash_value

function rollingHash(S, p, m):
    n = length(S)
    hashes = array of size n
    hashes[0] = computeHash(S[0], p, m)
    for i from 1 to n-1:
        hashes[i] = (hashes[i-1] * p + (ord(S[i]) - ord('a') + 1)) % m
    return hashes
Worked Example
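For instance, with p = 31 and m = 10^9 + 9, the string "abc" hashes to 1·31^0 + 2·31^1 + 3·31^2 = 1 + 62 + 2883 = 2946. A quick Python check, translating computeHash from the pseudocode above (the parameter defaults are chosen here for illustration):

def compute_hash(s, p=31, m=10**9 + 9):
    # polynomial rolling hash: character codes weighted by powers of p, mod m
    hash_value, p_pow = 0, 1
    for ch in s:
        hash_value = (hash_value + (ord(ch) - ord('a') + 1) * p_pow) % m
        p_pow = (p_pow * p) % m
    return hash_value

print(compute_hash("abc"))  # 2946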
Time Complexity
Hash Computation: O(n) for computing the hash of a string of length n.
Rolling Hash: O(n) for computing the hashes of all substrings.
Auxiliary Space Complexity
Space Complexity: O(n) for storing the hash values of substrings.
Conclusion
String hashing is an efficient technique for string comparison and retrieval, crucial in various
computational applications. By using polynomial rolling hash functions and rolling hash
methods, we can achieve fast and reliable string hashing, ensuring efficient data processing and
minimal collisions.
References
1. CP-Algorithms, String Hashing.