
Introduction

Algorithms
What is an Algorithm?

► A finite set of instructions that specifies a sequence of operations to be carried out in order to solve a specific problem or class of problems is called an Algorithm.
Why study Algorithms?

► As the speed of processors increases, performance is frequently said to be less central than other software quality characteristics (e.g. security, extensibility, reusability). However, large problem sizes are commonplace in computational science, which makes performance a very important factor: longer computation times mean slower results, less thorough research, and a higher cost of computation (if buying CPU hours from an external party). The study of algorithms therefore gives us a language to express performance as a function of problem size.
Design and Analysis of Algorithms

► Design and analysis of algorithms is a core subject of computer science that deals with developing and studying efficient algorithms for solving computational problems. It entails several steps, including problem formulation, algorithm design, algorithm analysis, and algorithm optimization.

► Problem formulation entails identifying the computational problem to be solved and specifying the input and output criteria. Algorithm design entails creating the set of instructions that a computer can follow to solve the problem. Algorithm analysis entails determining the algorithm's efficiency in terms of time and space complexity. Finally, algorithm optimization involves enhancing the algorithm's efficiency by making changes to the design or implementation.
Continued…

► There are several strategies for designing and evaluating algorithms, including brute force, divide and conquer, dynamic programming, and greedy algorithms. Each approach has its own strengths and weaknesses, and the choice of approach depends on the nature of the problem being solved.

► Algorithm analysis is often performed by examining the algorithm's worst-case time and space complexity. The time complexity of an algorithm refers to the amount of time it takes to solve a problem as a function of the input size. The space complexity of an algorithm refers to the amount of memory required to solve a problem as a function of the input size.
Continued…

► Efficient algorithm design and analysis are vital for solving large-scale computational problems in areas such as data science, artificial intelligence, and computational biology.
What is meant by Algorithm Analysis?

► Algorithm analysis refers to investigating the effectiveness of an algorithm in terms of time and space complexity. The fundamental purpose of algorithm analysis is to determine how much time and space an algorithm needs to solve a problem as a function of the size of the input. The time complexity of an algorithm is typically measured in terms of the number of basic operations (such as comparisons, assignments, and arithmetic operations) that the algorithm performs on the input data. The space complexity of an algorithm refers to the amount of memory the algorithm needs to solve the problem as a function of the size of the input. Algorithm analysis is crucial because it helps us compare different strategies and pick the best one for a given problem. It also helps us identify performance issues and improve algorithms to enhance their performance. There are many ways to analyze the time and space requirements of algorithms, including big O notation, big Omega notation, and big Theta notation. These notations offer a way to specify the growth rate of an algorithm's time or space requirements as the input size grows large.
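► To make counting basic operations concrete, here is a small illustrative C function (written for these notes, not taken from the slides). The loop body executes once per element, so the running time grows as O(n) while the extra memory stays O(1):

#include <stdio.h>

/* Sums the n elements of arr. The loop body runs n times, so the
   operation count grows linearly with the input size: time is O(n).
   Only a fixed number of variables are used, so extra space is O(1). */
long sum_array(const int arr[], int n) {
    long total = 0;                    /* 1 assignment                   */
    for (int i = 0; i < n; i++) {      /* n+1 comparisons, n increments  */
        total += arr[i];               /* n additions                    */
    }
    return total;
}

int main(void) {
    int a[] = {3, 1, 4, 1, 5};
    printf("%ld\n", sum_array(a, 5));  /* prints 14 */
    return 0;
}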
Why is Algorithm Analysis important?

► To forecast the behavior of an algorithm without implementing it on a specific computer.
► It is far more convenient to have basic metrics for an algorithm's efficiency than to implement the algorithm and assess its efficiency each time a specific parameter in the underlying computer system changes.
► It is hard to predict an algorithm's exact behavior. There are far too many
variables to consider.
► As a result, the analysis is simply an approximation; it is not perfect.
► More significantly, by comparing several algorithms, we can identify which
one is ideal for our needs.
Types of Algorithm Analysis:

1. Time complexity analysis: This kind of analysis measures the running time of an algorithm as a function of the input size. It typically entails counting the number of basic operations performed by the algorithm, such as comparisons, arithmetic operations, and memory accesses.
2. Space complexity analysis: This form of analysis measures the amount of memory required by an algorithm as a function of the input size. It typically includes counting the number of variables and data structures used by the algorithm, as well as the size of each of those data structures.
3. Worst-case analysis: This type of analysis measures the worst-case running time or space usage of an algorithm, assuming the input is the hardest possible for the algorithm to handle.
Continued…

4. Average-case analysis: This kind of analysis measures the expected running time or space usage of an algorithm, assuming a probabilistic distribution of inputs.
5. Best-case analysis: This form of analysis measures the best-case running time or space usage of an algorithm, assuming the input is the easiest possible for the algorithm to handle.
6. Asymptotic analysis: This sort of analysis measures the performance of an algorithm as the input size approaches infinity. It normally involves using mathematical notation to describe the growth rate of the algorithm's running time or space usage, such as O(n), Ω(n), or Θ(n).
Advantages of design and analysis of algorithms:
1. Improved efficiency: A properly designed algorithm can notably improve the performance of a program, leading to quicker execution times and reduced resource utilization. By studying algorithms and identifying regions of inefficiency, developers can optimize the algorithm to lessen its time and space complexity.
2. Better scalability: As the size of the input data increases, poorly designed algorithms can quickly become unmanageable, leading to slow execution times and crashes. By designing algorithms that scale well with increasing input sizes, developers can make certain that their programs remain usable as the data they handle grows.
Continued…

3. Improved code quality: A well-designed algorithm can result in better code quality overall, because it encourages developers to think seriously about their application's structure and organization. By breaking down complicated problems into smaller, more manageable subproblems, developers can create code that is simpler to understand and maintain.
4. Increased innovation: By knowing how algorithms work and how they can be optimized, developers can create new and innovative solutions to complex problems. This can lead to new products, services, and technologies that can have a considerable impact on the world.
5. Competitive advantage: In industries where speed and performance are vital, having properly designed algorithms can provide an extensive competitive advantage. By optimizing algorithms to reduce costs and enhance performance, organizations can gain an edge over their competitors.
Applications:

1. Search engines: Google and other search engines use complex algorithms to index and rank websites, ensuring that users get the most relevant search results.
2. Machine Learning: Machine learning algorithms are used to train computer programs to learn from data and make predictions or decisions based on that data. They are used in applications such as image recognition, speech recognition, and natural language processing.
3. Cryptography: Cryptographic algorithms are used to secure data
transmission and protect sensitive information such as credit card
numbers and passwords.
Continued…

4. Optimization: Optimization algorithms are used to find the optimal solution to a problem, such as the shortest path between two points or the most efficient resource allocation.
5. Finance: Algorithms are used in finance for applications such as risk assessment, fraud detection, and high-frequency trading.
6. Games: Game developers use artificial intelligence and pathfinding algorithms, allowing game characters to make intelligent decisions and navigate game environments more efficiently.
7. Data Analytics: Data analytics applications use algorithms to process large amounts of data and extract meaningful insights, such as trends and patterns.
8. Robotics: Robotics algorithms are used to control robots and enable them to perform complex tasks such as recognizing and manipulating objects.
Types of Algorithm Analysis

► There are different types of algorithm analysis that are used to evaluate the efficiency of algorithms. Here are the most commonly used types:
1. Time complexity analysis: This kind of analysis focuses on the amount of time an algorithm takes to execute as a function of the input size. It measures the number of operations or steps an algorithm takes to solve a problem and expresses this in terms of big O notation.
2. Space complexity analysis: This type of analysis focuses on the amount of memory an algorithm requires to execute as a function of the input size. It measures the amount of memory used by the algorithm to solve a problem and expresses this in terms of big O notation.
3. Best-case analysis: This form of analysis determines the minimum amount of time or memory an algorithm requires to solve a problem for any input size. It is typically expressed in terms of big O notation.
Example

► As an example of best-case analysis, consider computing the best-case time complexity of linear search. Assume you have an array of integers and need to find a given number.
► The code for this problem is given below:
int linear_search(int arr[], int l, int target) {
    int i;
    for (i = 0; i < l; i++) {
        if (arr[i] == target) {
            return i;      /* found: return the index */
        }
    }
    return -1;             /* not found */
}
Continued…

► Assume the number you're looking for is present at the array's very first index. In such instances, your method will find the number in O(1) time in the best case. As a result, the best-case complexity for this algorithm is O(1): constant time. In practice, the best case is rarely used for measuring the runtime of algorithms, and the best-case scenario is never used to design an algorithm.
► 4. Worst-case analysis: This sort of analysis determines the maximum amount of time or memory an algorithm requires to solve a problem for any input size. It is normally expressed in terms of big O notation.
Continued…

► Consider our last example, where we were executing the linear search. Assume that this time the element we're looking for is at the very end of the array. As a result, we'll have to go through the entire array before we find the element, so the worst case for this method is O(N): we must examine all N elements before we find our target. This is how we calculate an algorithm's worst case.
Continued…

► 5. Average-case analysis: This type of analysis determines the expected amount of time or memory an algorithm requires to solve a problem over all possible inputs. It is usually expressed in terms of big O notation.
► 6. Amortized analysis: This type of analysis determines the average time or memory usage of a sequence of operations on a data structure, rather than just one operation. It is frequently used to analyze data structures such as dynamic arrays and binary heaps.
► These forms of analysis help us understand the overall performance of an algorithm and pick the best algorithm for a specific problem.
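► As a quick sketch of the dynamic-array case mentioned above (a minimal C illustration with hypothetical names; error handling omitted): the capacity doubles whenever the array is full, so a sequence of n pushes performs O(n) element copies in total, i.e. O(1) amortized time per push.

#include <stdlib.h>

/* Minimal dynamic array; start with Vec v = {NULL, 0, 0}. */
typedef struct {
    int *data;
    int size;
    int capacity;
} Vec;

void vec_push(Vec *v, int x) {
    if (v->size == v->capacity) {             /* grow only when full      */
        v->capacity = v->capacity ? v->capacity * 2 : 1;
        v->data = realloc(v->data, v->capacity * sizeof(int));
    }
    v->data[v->size++] = x;                   /* O(1) in the common case  */
}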
Divide and Conquer

► Divide and conquer is a powerful algorithmic method used in computer science to solve complicated problems efficiently. The idea behind this approach is to divide a complex problem into smaller, simpler sub-problems, solve every sub-problem independently, and then combine the answers to obtain the final solution. This technique is based on the principle that it is often easier to solve a smaller, simpler problem than a bigger, more complicated one.
► The divide and conquer method is frequently used in algorithm design for solving an extensive range of problems, including sorting, searching, and optimization. The method may be used to design efficient algorithms for problems that are otherwise difficult to solve. The key concept is to recursively divide the problem into smaller sub-problems, solve each sub-problem independently, and then combine the solutions to obtain the final answer.
The divide and conquer technique can be broken down into 3 steps:
1. Divide: In this step, the problem is broken down into smaller sub-problems. This step entails identifying the key components of the problem and finding the best way to partition it into smaller, more manageable sub-problems. The sub-problems should be smaller than the original problem, but must still contain all the data necessary to solve the problem.
2. Conquer: In this step, each sub-problem is solved independently. This step involves applying the necessary algorithms and techniques to solve every sub-problem. The purpose is to develop a solution that is as efficient as possible for each sub-problem.
3. Combine: In this step, the solutions to the sub-problems are combined to obtain the final solution to the original problem. This step entails merging the solutions from each sub-problem into a single solution. The aim is to make certain that the final answer is correct and efficient.
Continued…

► One of the most popular examples of the divide and conquer technique is the merge sort algorithm, which is used to sort an array of numbers in ascending or descending order. The merge sort algorithm works by dividing the array into two halves, sorting each half separately, and then merging the sorted halves to obtain the final sorted array. The algorithm works as follows:
1. Divide: The array is split into halves recursively until each half has only one element.
2. Conquer: Each sub-array is sorted using the merge sort algorithm recursively.
3. Combine: The sorted sub-arrays are merged to obtain the final sorted array.
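► A compact C implementation of merge sort along these lines (a sketch written for these notes; it sorts in ascending order and assumes a C99 compiler for the variable-length temporary buffer):

#include <string.h>

/* Sorts arr[lo..hi] in ascending order using divide and conquer. */
void merge_sort(int arr[], int lo, int hi) {
    if (lo >= hi) return;                 /* base case: one element   */
    int mid = lo + (hi - lo) / 2;
    merge_sort(arr, lo, mid);             /* divide + conquer: left   */
    merge_sort(arr, mid + 1, hi);         /* divide + conquer: right  */

    /* combine: merge the two sorted halves into a temporary buffer */
    int tmp[hi - lo + 1];
    int i = lo, j = mid + 1, k = 0;
    while (i <= mid && j <= hi)
        tmp[k++] = (arr[i] <= arr[j]) ? arr[i++] : arr[j++];
    while (i <= mid) tmp[k++] = arr[i++];
    while (j <= hi)  tmp[k++] = arr[j++];
    memcpy(&arr[lo], tmp, (hi - lo + 1) * sizeof(int));
}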
Continued…

► Another example of the divide and conquer method is the binary search algorithm, which is used to find the position of a target value in a sorted array. The binary search algorithm works by repeatedly dividing the array into two halves until the target value is found or determined to be not present in the array. The algorithm works as follows:
1. Divide: The array is split into two halves.
2. Conquer: The algorithm determines which half of the array the target value is in, or determines that the target value is not present in the array.
3. Combine: The final position of the target value within the array is determined.
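► A minimal iterative C sketch of binary search (it returns the index of the target, or -1 if the target is absent):

/* Returns the index of target in the sorted array arr[0..n-1], or -1
   if it is not present. Each step halves the search range, so the
   running time is O(log n). */
int binary_search(const int arr[], int n, int target) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;         /* divide: pick the middle */
        if (arr[mid] == target) return mid;
        if (arr[mid] < target) lo = mid + 1;  /* conquer right half      */
        else                   hi = mid - 1;  /* conquer left half       */
    }
    return -1;                                /* not present             */
}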
Searching and traversal techniques

► Searching and traversal techniques are used in computer science to traverse or search through data structures such as trees, graphs, and arrays. There are several common techniques used for searching and traversal, including:
1. Linear Search: Linear search is a simple technique used to search an
array or list for a specific element. It works by sequentially checking each
element of the array until the target element is found, or the end of the
array is reached.
2. Binary Search: Binary search is a more efficient technique for searching a sorted array. It works by repeatedly dividing the array in half and checking the middle element to determine if it is greater than or less than the target element. This process is repeated until the target element is found or the search range becomes empty.
Continued…

1. Depth-First Search DFS DFS is a traversal technique used to traverse


graphs and trees. It works by exploring each branch of the graph or tree
as deeply as possible before backtracking to explore other branches. DFS
is implemented recursively and is useful for finding connected
components and cycles in a graph.
2. Breadth-First Search BFS BFS is another traversal technique used to
traverse graphs and trees. It works by exploring all the vertices at the
current level before moving on to explore the vertices at the next level.
BFS is implemented using a queue and is useful for finding the shortest
path between two vertices in a graph.
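► A minimal BFS sketch in C over an adjacency matrix (illustrative only; the matrix representation and the fixed size N are assumptions made for this example):

#include <stdio.h>

#define N 5   /* number of vertices (illustrative) */

/* Breadth-first search: visits all vertices at distance 1 from the
   start, then distance 2, and so on. A simple array-based queue
   drives the level-order traversal. */
void bfs(const int adj[N][N], int start) {
    int visited[N] = {0};
    int queue[N], head = 0, tail = 0;
    visited[start] = 1;
    queue[tail++] = start;
    while (head < tail) {
        int v = queue[head++];            /* dequeue next vertex  */
        printf("%d ", v);
        for (int u = 0; u < N; u++)
            if (adj[v][u] && !visited[u]) {
                visited[u] = 1;           /* mark before enqueue  */
                queue[tail++] = u;
            }
    }
}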
Continued…

5. Dijkstra's Algorithm: Dijkstra's algorithm is a search algorithm used to find the shortest path between two nodes in a weighted graph. It works by starting at the source node and iteratively selecting the unvisited node with the smallest distance from the source until the destination node is reached.
► A Algorithm: A* algorithm is a heuristic search algorithm used for
pathfinding and graph traversal. It combines the advantages of BFS and
Dijkstra's algorithm by using a heuristic function to estimate the distance
to the target node.
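► A simple O(N²) sketch of Dijkstra's algorithm in C (illustrative; it assumes an adjacency matrix where 0 means "no edge" and uses a linear scan for the closest unvisited vertex instead of a priority queue):

#include <limits.h>

#define N 5   /* number of vertices (illustrative) */

/* dist[] receives the shortest distance from src to every vertex. */
void dijkstra(const int w[N][N], int src, int dist[N]) {
    int done[N] = {0};
    for (int i = 0; i < N; i++) dist[i] = INT_MAX;
    dist[src] = 0;
    for (int round = 0; round < N; round++) {
        int v = -1;
        for (int i = 0; i < N; i++)       /* pick closest unvisited */
            if (!done[i] && (v < 0 || dist[i] < dist[v])) v = i;
        if (v < 0 || dist[v] == INT_MAX) break;
        done[v] = 1;
        for (int u = 0; u < N; u++)       /* relax edges out of v   */
            if (w[v][u] && !done[u] && dist[v] + w[v][u] < dist[u])
                dist[u] = dist[v] + w[v][u];
    }
}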
Greedy Method:

► The greedy method is a problem-solving strategy in the design and analysis of algorithms. It is a simple and effective approach to solving optimization problems that involves making a series of choices intended to lead to an optimal solution.
► In the greedy method, the algorithm makes the locally optimal choice at each step, hoping that the sequence of choices will lead to the globally optimal solution. This means that at each step, the algorithm chooses the best available option without considering the future consequences of that decision.
► The greedy method is useful when the problem can be broken down into a
series of smaller subproblems, and the solution to each subproblem can
be combined to form the overall solution. It is commonly used in problems
involving scheduling, sorting, and graph algorithms.
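► One classic illustration (not from the slides) is activity selection: sort the activities by finish time, then always take the next activity that starts after the previous choice finishes. This locally optimal choice yields a maximum-size schedule. A C sketch, assuming nonnegative start times:

#include <stdlib.h>

typedef struct { int start, finish; } Activity;

static int by_finish(const void *a, const void *b) {
    return ((const Activity *)a)->finish - ((const Activity *)b)->finish;
}

/* Returns the maximum number of non-overlapping activities. */
int select_activities(Activity acts[], int n) {
    qsort(acts, n, sizeof(Activity), by_finish);
    int count = 0, last_finish = -1;      /* assumes starts >= 0 */
    for (int i = 0; i < n; i++)
        if (acts[i].start >= last_finish) {  /* compatible?       */
            last_finish = acts[i].finish;    /* commit the choice */
            count++;
        }
    return count;
}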
Continued…

► However, the greedy method does not always lead to the optimal solution,
and in some cases, it may not even find a feasible solution. Therefore, it is
important to verify the correctness of the solution obtained by the greedy
method.
► To analyze the performance of a greedy algorithm, one can use the
greedy-choice property, which states that at each step, the locally optimal
choice must be a part of the globally optimal solution. Additionally, the
optimal substructure property is used to show that the optimal solution to
a problem can be obtained by combining the optimal solutions to its
subproblems.
Continued…

► The greedy method has several advantages that make it a useful technique for solving optimization problems. Some of the advantages are:
1. Simplicity: The greedy method is a simple and easy-to-understand
approach, making it a popular choice for solving optimization problems.
2. Efficiency: The greedy method is often very efficient in terms of time and
space complexity, making it ideal for problems with large datasets.
3. Flexibility: The greedy method can be applied to a wide range of
optimization problems, including scheduling, graph algorithms, and data
compression.
4. Intuitive: The greedy method often produces intuitive and easily
understandable solutions, which can be useful in decision-making.
The greedy method is widely used in a
variety of applications, some of which are:

1. Scheduling: The greedy method is used to solve scheduling problems, such as job scheduling, task sequencing, and project management.
2. Graph Algorithms: The greedy method is used to solve problems in graph
theory, such as finding the minimum spanning tree and shortest path in a
graph.
3. Data Compression: The greedy method is used to compress data, such as
image and video compression.
4. Resource Allocation: The greedy method is used to allocate resources,
such as bandwidth and storage, in an optimal manner.
5. Decision Making: The greedy method can be used to make decisions in
various fields, such as finance, marketing, and healthcare.
Continued…

► The greedy method is a powerful and versatile technique that can be applied to a wide range of optimization problems. Its simplicity, efficiency, and flexibility make it a popular choice for solving such problems in various fields.
Dynamic Programming:

► Dynamic programming is a problem-solving approach in computer science and mathematics that involves breaking down complex problems into simpler overlapping subproblems and solving them in a bottom-up manner. It is commonly used to optimize the time and space complexity of algorithms by storing the results of subproblems and reusing them as needed.
► The basic idea behind dynamic programming is to solve a problem by solving its smaller subproblems and combining their solutions to obtain the answer to the original problem. This method is frequently referred to as "memoization", which means storing the results of expensive function calls and reusing them when the same inputs occur again.
Continued…

► The key concept in dynamic programming is the notion of optimal substructure. If a problem can be solved optimally by breaking it down into smaller subproblems and solving them independently, then it exhibits optimal substructure. This property allows dynamic programming algorithms to construct an optimal solution by making locally optimal choices and combining them to form a globally optimal one.
► Dynamic programming algorithms typically use a table or an array to store the solutions to subproblems. The table is filled in a systematic manner, beginning with the smallest subproblems and gradually building up to the larger ones. This process is known as "tabulation".
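► The two styles can be seen side by side on the Fibonacci numbers; the C sketch below was written for these notes (the caller must pass a memo array initialized to -1 for the top-down version):

/* Top-down DP ("memoization"): store each result the first time it is
   computed and reuse it, turning the naive exponential recursion into
   O(n) time. memo[i] == -1 means "not yet known". */
long fib_memo(int n, long memo[]) {
    if (n <= 1) return n;
    if (memo[n] != -1) return memo[n];        /* reuse stored answer */
    memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo);
    return memo[n];
}

/* Bottom-up DP ("tabulation"): fill the table from the smallest
   subproblems up, so every answer is ready when it is needed.
   Assumes a C99 compiler for the variable-length array. */
long fib_tab(int n) {
    if (n <= 1) return n;
    long table[n + 1];
    table[0] = 0; table[1] = 1;
    for (int i = 2; i <= n; i++)
        table[i] = table[i - 1] + table[i - 2];
    return table[n];
}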
Continued…

► One critical feature of dynamic programming is the ability to avoid redundant computations. By storing the answers to subproblems in a table, we are able to retrieve them in constant time rather than recomputing them. This leads to large performance improvements when the same subproblems are encountered multiple times.
► Dynamic programming can be applied to a wide range of problems, such as optimization, pathfinding, sequence alignment, resource allocation, and more. It is especially useful when the problem exhibits overlapping subproblems and optimal substructure.
Advantages:

► Optimal Solutions: Dynamic programming ensures finding the optimal solution to a problem by considering all viable subproblems. By breaking down a complicated problem into smaller subproblems, it systematically explores all the potential answers and combines them to obtain the best overall answer.
► Efficiency: Dynamic programming can significantly improve the performance of algorithms by avoiding redundant computations. By storing the answers to subproblems in a table or array, it removes the need to recalculate them when they are encountered again, leading to faster execution times.
► Overlapping Subproblems: Many real-world problems exhibit overlapping subproblems, in which the same subproblems are solved multiple times. Dynamic programming leverages this property by storing the solutions of subproblems and reusing them when needed. This technique reduces the overall computational effort and improves efficiency.
Continued…

• Break Complex Problems into Smaller Parts: Dynamic programming breaks down complex problems into simpler, more manageable subproblems. By focusing on solving those smaller subproblems independently, it simplifies the overall problem-solving process and makes it easier to design and implement algorithms.
• Applicable to a Wide Range of Problems: Dynamic programming is a versatile technique applicable to various kinds of problems, including optimization, resource allocation, sequence alignment, shortest path, and many others. It provides a structured approach to problem-solving and can be tailored to different domains and scenarios.
• Flexibility: Dynamic programming permits flexible problem-solving strategies. It can be applied in a bottom-up manner, solving subproblems iteratively and building up to the final answer. It can also be used in a top-down manner, recursively solving subproblems and memoizing the results. This flexibility permits programmers to pick the technique that best suits the problem at hand.
Continued…

► Mathematical Foundation: Dynamic programming has a solid mathematical foundation, which provides a rigorous framework for analyzing and understanding the behavior of algorithms. This foundation allows for the development of optimal and efficient solutions based on the problem's characteristics and properties.
Dynamic Programming

► In brief, dynamic programming is a problem-solving method that breaks down complex problems into simpler subproblems, solves them independently, and combines their solutions to obtain the solution to the original problem. It optimizes the computation by reusing the results of subproblems, avoiding redundant calculations, and achieving efficient time and space complexity.

► Dynamic programming is a method for solving complicated problems by breaking them down into smaller subproblems. The answers to those subproblems are then combined to find the answer to the original problem. Dynamic programming is regularly used to solve optimization problems, such as finding the shortest path between two points or the maximum profit that can be made from a fixed set of assets.
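► As a sketch of the "maximum profit" case, the classic 0/1 knapsack problem can be solved by tabulation; the C version below is a standard textbook formulation written for these notes (it assumes a C99 compiler for the variable-length array):

/* 0/1 knapsack by tabulation: best[c] holds the maximum profit
   achievable with capacity c using the items considered so far.
   Iterating capacities downward ensures each item is used once. */
int knapsack(const int weight[], const int value[], int n, int cap) {
    int best[cap + 1];
    for (int c = 0; c <= cap; c++) best[c] = 0;
    for (int i = 0; i < n; i++)
        for (int c = cap; c >= weight[i]; c--) {
            int with_item = best[c - weight[i]] + value[i];
            if (with_item > best[c]) best[c] = with_item;
        }
    return best[cap];
}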
Continued…

► Dynamic programming is an effective method that may be used to solve a wide variety of problems. However, it is critical to note that not all problems can be solved using dynamic programming. To apply dynamic programming, the problem must have the following properties:
• Overlapping subproblems: The problem must be capable of being broken down into smaller subproblems such that the solution to each subproblem can be reused in solving the original problem.
• Optimal substructure: The optimal solution to the original problem must be composed of the optimal solutions to its subproblems.
► If a problem does not have these properties, then dynamic programming cannot be used to solve it.
Summary

• Divide and Conquer: Breaks the problem into independent subproblems and
combines their solutions.
• Greedy Algorithm: Makes the best local choice at each step without
reconsidering previous choices.
• Dynamic Programming: Solves subproblems recursively and stores their
solutions to avoid redundancy.