ML Module V


INTRODUCTION TO ML

MODULE-V
1. Explain about Reinforcement Learning
A. Reinforcement learning is an area of machine learning. It is about taking suitable actions to maximize reward in a particular situation. It is employed by various software systems and machines to find the best possible behavior or path to take in a specific situation. Reinforcement learning differs from supervised learning: in supervised learning, the training data carries the answer key, so the model is trained on the correct answers themselves, whereas in reinforcement learning there is no answer key, and the reinforcement agent decides what to do to perform the given task. In the absence of a training dataset, it is bound to learn from its own experience.
Reinforcement Learning (RL) is the science of decision making: learning the optimal behavior in an environment so as to obtain maximum reward. In RL, data is accumulated by the learning system itself through trial and error; it is not part of the input the way it is in supervised or unsupervised machine learning.
Reinforcement learning uses algorithms that learn from outcomes and decide which action to take next. After each action, the algorithm receives feedback that helps it determine whether the choice it made was correct, neutral, or incorrect. It is a good technique for automated systems that have to make many small decisions without human guidance.
Reinforcement learning is an autonomous, self-teaching system that essentially learns by trial and error. It performs actions with the aim of maximizing rewards; in other words, it learns by doing in order to achieve the best outcomes.
Main points in Reinforcement learning –

 Input: The input should be an initial state from which the model will start.
 Output: There are many possible outputs, as there are a variety of solutions to a particular problem.
 Training: The training is based upon the input; the model returns a state, and the user decides whether to reward or punish the model based on its output.
 The model continues to learn.
 The best solution is decided based on the maximum reward.
Types of Reinforcement:


There are two types of Reinforcement:


1. Positive: Positive reinforcement occurs when an event, occurring as a result of a particular behavior, increases the strength and frequency of that behavior. In other words, it has a positive effect on behavior.
Advantages of positive reinforcement:
 Maximizes performance
 Sustains change for a long period of time
Drawback: too much reinforcement can lead to an overload of states, which can diminish the results.
2. Negative: Negative reinforcement is the strengthening of a behavior because a negative condition is stopped or avoided.
Advantages of negative reinforcement:
 Increases behavior
 Helps enforce a minimum standard of performance
Drawback: it only provides enough to meet the minimum behavior.
Elements of Reinforcement Learning
Reinforcement learning elements are as follows:
1. Policy
2. Reward function
3. Value function
4. Model of the environment
Policy: A policy defines the learning agent's behavior at a given time. It is a mapping from perceived states of the environment to the actions to be taken when in those states.
Reward function: The reward function defines the goal in a reinforcement learning problem. It is a function that provides a numerical score based on the state of the environment.
Value function: Value functions specify what is good in the long run. The value of a state is
the total amount of reward an agent can expect to accumulate over the future, starting from that
state.
Model of the environment: A model mimics the behavior of the environment, allowing inferences about how the environment will behave. Models are used for planning.


Credit assignment problem: Reinforcement learning algorithms learn to generate an internal value for the intermediate states reflecting how good they are in leading to the goal. The learning decision maker is called the agent. The agent interacts with the environment, which includes everything outside the agent.
The agent has sensors to decide on its state in the environment and takes actions that modify its state.

The reinforcement learning problem is modeled as an agent continuously interacting with an environment. The agent and the environment interact in a sequence of time steps. At each time step t, the agent receives the state of the environment and a scalar numerical reward for the previous action, and then selects an action.
Reinforcement learning is a technique for solving Markov decision problems.

Reinforcement learning uses a formal framework defining the interaction between a learning agent and its environment in terms of states, actions, and rewards. This framework is intended to be a simple way of representing the essential features of the artificial intelligence problem.
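This state-action-reward loop can be sketched in code. Below is a minimal illustration using tabular Q-learning; the `env` object, with its `reset()`, `step()`, and `actions` members, is a hypothetical interface assumed here for illustration, loosely following the common reset/step convention.

```python
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1):
    q = defaultdict(float)  # Q-value estimate for each (state, action) pair

    for _ in range(episodes):
        state = env.reset()                  # agent observes the initial state
        done = False
        while not done:
            # Epsilon-greedy: explore a random action with probability epsilon,
            # otherwise exploit the current best-known action
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: q[(state, a)])

            # Environment returns the next state and a scalar reward
            next_state, reward, done = env.step(action)

            # Move the old estimate toward reward + discounted future value
            best_next = max(q[(next_state, a)] for a in env.actions)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q
```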
Applications of Reinforcement Learning
1. Robotics: Robots with pre-programmed behavior are useful in structured environments, such
as the assembly line of an automobile manufacturing plant, where the task is repetitive in
nature.
2. A master chess player makes a move. The choice is informed both by planning (anticipating possible replies and counter-replies) and by immediate, intuitive judgment.
3. An adaptive controller adjusts parameters of a petroleum refinery’s operation in real time.
RL can be used in large environments in the following situations:

1. A model of the environment is known, but an analytic solution is not available;
2. Only a simulation model of the environment is given (the subject of simulation-based optimization);
3. The only way to collect information about the environment is to interact with it.


2. Explain about Gaussian Mixture Models


A. Suppose there are K clusters (for the sake of simplicity, it is assumed here that the number of clusters is known, and it is K). So the mean $\mu_k$ and covariance $\Sigma_k$ are also estimated for each cluster k. Had there been only one distribution, they would have been estimated by the maximum-likelihood method. But since there are K such clusters, the probability density is defined as a linear combination of the densities of all these K distributions, i.e.

$$p(x) = \sum_{k=1}^{K} \pi_k \, \mathcal{N}(x \mid \mu_k, \Sigma_k),$$

where the mixing weights $\pi_k$ satisfy $\sum_{k=1}^{K} \pi_k = 1$.
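As a minimal sketch of fitting such a mixture in practice, the snippet below uses scikit-learn's GaussianMixture on synthetic one-dimensional data; the dataset and the choice K = 2 are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic data drawn from two Gaussians (illustrative only)
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2, 0.5, 300),
                       rng.normal(3, 1.0, 700)]).reshape(-1, 1)

# Fit a K = 2 Gaussian mixture with the EM algorithm
gmm = GaussianMixture(n_components=2, random_state=0).fit(data)

print("weights (pi_k):", gmm.weights_)           # mixing weights
print("means (mu_k):", gmm.means_.ravel())       # component means
print("covariances (Sigma_k):", gmm.covariances_.ravel())
```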


Real-Life Examples of Gaussian mixture models


Gaussian mixture models (GMMs), as stated above, are statistical models that can be used to represent the probability distribution of a multi-dimensional continuous variable as a weighted sum of multiple multivariate normal distributions. GMMs are often used in a variety of applications, including clustering, density estimation, and anomaly detection. Here are a few examples of how GMMs could be used in real life:
 Clustering: GMMs can be used to identify patterns and group similar observations
together. For example, a GMM could be used to cluster customers into different
segments based on their purchase history and demographic data.
 Density estimation: GMMs can be used to estimate the probability density function
(PDF) of a given dataset. This can be useful for tasks such as density-based anomaly
detection, where GMMs can be used to identify observations that are significantly
different from the rest of the data.
 Anomaly detection: GMMs can be used to detect anomalous observations in a
dataset. For example, a GMM could be trained on normal network traffic data, and
then used to identify unusual traffic patterns that may indicate an intrusion attempt.
 Speech recognition: GMMs are often used in speech recognition systems to model
the probability distribution of speech sounds (phonemes). This allows the system to
identify the most likely sequence of phonemes given an input audio signal.
 Computer vision: GMMs can be used in computer vision applications to model the
appearance of objects in an image. For example, a GMM could be used to model the
appearance of different types of vehicles in a traffic surveillance system.
Advantages of Gaussian Mixture Models
 Flexibility: Gaussian mixture models can model a wide range of probability distributions, since they can approximate any distribution that can be represented as a weighted sum of multiple normal distributions. Hence, they are very flexible in nature.


 Robustness: Gaussian mixture models are relatively robust to outliers in the data, as they can accommodate the presence of multiple modes, called "peaks," in the distribution.
 Speed: Gaussian mixture models are relatively fast to fit to a dataset, especially when using an efficient optimization algorithm such as the expectation-maximization (EM) algorithm.
Disadvantages of Gaussian Mixture Models
There are a few drawbacks to using Gaussian Mixture Models which are stated below:
 Sensitivity to initialization: Gaussian mixture models can be sensitive to the initial values of the model parameters, especially when there are many components in the mixture. This can sometimes lead to poor convergence to the true maximum-likelihood solution.
 Assumption of normality: Gaussian mixture models assume that the data are generated from a mixture of normal distributions, which may not always be the case in practice. If the data deviate significantly from normality, GMMs may not be the most appropriate model.
 Number of components: Choosing the appropriate number of components in a Gaussian mixture model can be challenging, as too many components may overfit the data, while too few may underfit it.

3. Explain about the Expectation-Maximization (EM) Algorithm


Expectation-Maximization (EM) Algorithm
The Expectation-Maximization (EM) algorithm is an iterative optimization method that combines different unsupervised machine learning algorithms to find maximum-likelihood or maximum a posteriori (MAP) estimates of parameters in statistical models that involve unobserved latent variables. The EM algorithm is commonly used for latent variable models and can handle missing data. It consists of an expectation step (E-step) and a maximization step (M-step), forming an iterative process to improve model fit.
 In the E-step, the algorithm estimates the latent variables, i.e., computes the expectation of the log-likelihood using the current parameter estimates.


 In the M-step, the algorithm determines the parameters that maximize the expected log-likelihood obtained in the E-step, and the model parameters are updated accordingly based on the estimated latent variables.

By iteratively repeating these steps, the EM algorithm seeks to maximize the likelihood of the
observed data. It is commonly used for unsupervised learning tasks, such as clustering, where
latent variables are inferred and has applications in various fields, including machine learning,
computer vision, and natural language processing.
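In symbols, writing X for the observed data, Z for the latent variables, and θ for the parameters, the two steps can be summarized as follows; this is the standard general formulation, not specific to any one model.

```latex
\textbf{E-step:}\quad
Q\!\left(\theta \mid \theta^{(t)}\right)
  = \mathbb{E}_{Z \mid X,\, \theta^{(t)}}\!\left[\log p(X, Z \mid \theta)\right]
\qquad
\textbf{M-step:}\quad
\theta^{(t+1)} = \arg\max_{\theta}\; Q\!\left(\theta \mid \theta^{(t)}\right)
```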
Key Terms in Expectation-Maximization (EM) Algorithm
Some of the most commonly used key terms in the Expectation-Maximization (EM) Algorithm
are as follows:
 Latent Variables: Latent variables are unobserved variables in statistical models
that can only be inferred indirectly through their effects on observable variables.
They cannot be directly measured but can be detected by their impact on the
observable variables.
 Likelihood: It is the probability of observing the given data given the parameters of
the model. In the EM algorithm, the goal is to find the parameters that maximize the
likelihood.
 Log-Likelihood: It is the logarithm of the likelihood function, which measures the
goodness of fit between the observed data and the model. EM algorithm seeks to
maximize the log-likelihood.


 Maximum Likelihood Estimation (MLE): MLE is a method to estimate the parameters of a statistical model by finding the parameter values that maximize the likelihood function, which measures how well the model explains the observed data.
 Posterior Probability: In the context of Bayesian inference, the EM algorithm can
be extended to estimate the maximum a posteriori (MAP) estimates, where the
posterior probability of the parameters is calculated based on the prior distribution
and the likelihood function.
 Expectation (E) Step: The E-step of the EM algorithm computes the expected
value or posterior probability of the latent variables given the observed data and
current parameter estimates. It involves calculating the probabilities of each latent
variable for each data point.
 Maximization (M) Step: The M-step of the EM algorithm updates the parameter
estimates by maximizing the expected log-likelihood obtained from the E-step. It
involves finding the parameter values that optimize the likelihood function,
typically through numerical optimization methods.
 Convergence: Convergence refers to the condition when the EM algorithm has reached a stable solution. It is typically determined by checking whether the change in the log-likelihood or the parameter estimates falls below a predefined threshold.
How the Expectation-Maximization (EM) Algorithm Works:
The essence of the Expectation-Maximization algorithm is to use the available observed data of the dataset to estimate the missing data, and then use those estimates to update the values of the parameters. Let us understand the EM algorithm in detail.


1. Initialization:
 Initially, a set of initial values of the parameters is considered. A set of incomplete observed data is given to the system, with the assumption that the observed data comes from a specific model.
2. E-Step (Expectation Step): In this step, we use the observed data in order to
estimate or guess the values of the missing or incomplete data. It is basically used to
update the variables.
 Compute the posterior probability or responsibility of each latent
variable given the observed data and current parameter estimates.
 Estimate the missing or incomplete data values using the current
parameter estimates.
 Compute the log-likelihood of the observed data based on the current
parameter estimates and estimated missing data.


3. M-step (Maximization Step): In this step, we use the complete data generated in
the preceding “Expectation” – step in order to update the values of the parameters.
It is basically used to update the hypothesis.
 Update the parameters of the model by maximizing the expected
complete data log-likelihood obtained from the E-step.
 This typically involves solving optimization problems to find the
parameter values that maximize the log-likelihood.
 The specific optimization technique used depends on the nature of the
problem and the model being used.
4. Convergence: In this step, we check whether the values are converging; if they are, we stop, otherwise we repeat the Expectation and Maximization steps until convergence occurs. (A code sketch of the full loop follows this list.)
 Check for convergence by comparing the change in log-likelihood or the
parameter values between iterations.
 If the change is below a predefined threshold, stop and consider the
algorithm converged.
 Otherwise, go back to the E-step and repeat the process until
convergence is achieved.
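The following is a minimal sketch of these four steps for a one-dimensional, two-component Gaussian mixture, written from scratch with NumPy and SciPy; the synthetic data and the fixed component count are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 400), rng.normal(5, 1.5, 600)])

# 1. Initialization: rough starting guesses for weights, means, and stds
pi, mu, sigma = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])

prev_ll = -np.inf
for _ in range(200):
    # 2. E-step: responsibilities, the posterior probability of each component
    dens = pi * norm.pdf(x[:, None], mu, sigma)        # shape (n, 2)
    resp = dens / dens.sum(axis=1, keepdims=True)

    # 3. M-step: re-estimate the parameters from the responsibilities
    nk = resp.sum(axis=0)
    pi = nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

    # 4. Convergence: stop when the log-likelihood barely changes
    ll = np.log(dens.sum(axis=1)).sum()
    if abs(ll - prev_ll) < 1e-6:
        break
    prev_ll = ll

print("weights:", pi, "means:", mu, "stds:", sigma)
```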
Applications of the EM algorithm
 It can be used to fill in the missing data in a sample.
 It can be used as the basis of unsupervised learning of clusters.
 It can be used for the purpose of estimating the parameters of the Hidden Markov
Model (HMM).
 It can be used for discovering the values of latent variables.
Advantages of EM algorithm
 It is guaranteed that the likelihood will increase (or at least not decrease) with each iteration.
 The E-step and M-step are often pretty easy for many problems in terms of
implementation.
 Solutions to the M-steps often exist in the closed form.
Disadvantages of EM algorithm
 It has slow convergence.


 It converges to a local optimum only.
 It requires both the forward and backward probabilities (numerical optimization requires only the forward probability).
4. Explain soft clustering in GMMs

Soft clustering, often associated with Gaussian Mixture Models (GMMs), is a probabilistic
approach to clustering where data points are not assigned exclusively to a single cluster, but
rather, each point has a probability distribution over all clusters. GMMs are a type of
probabilistic model that assumes the data is generated by a mixture of several Gaussian
distributions.
Here's a breakdown of how soft clustering works in the context of GMMs:
1. Gaussian Mixture Model (GMM): A GMM represents the data as a combination of
multiple Gaussian distributions. Each Gaussian distribution is associated with a cluster,
and the overall data distribution is a weighted sum of these Gaussian components.
2. Probability Distributions: Instead of assigning each data point to a single cluster,
GMMs provide a probability distribution for each point across all clusters. The
probability that a data point belongs to a particular cluster is given by the weight of the
corresponding Gaussian component.
3. Parameters of the Model:
 Means (μ): Represent the center of each Gaussian distribution.
 Covariances (Σ): Indicate the shape and orientation of each Gaussian.
 Weights (π): Reflect the importance of each Gaussian component in the overall
mixture.
4. Expectation-Maximization (EM) Algorithm:
 E-step (Expectation): Calculate the probability that each data point belongs to
each cluster. This step involves computing the responsibility of each cluster for
each data point using Bayes' theorem.
 M-step (Maximization): Update the parameters (means, covariances, and
weights) of the Gaussian components based on the responsibilities computed in
the E-step.


5. Soft Assignment: After running the EM algorithm, you obtain a set of probabilities for
each data point across all clusters. These probabilities reflect the likelihood that a point
belongs to each cluster.
The soft clustering property of GMMs is beneficial when the boundaries between clusters are not
well-defined. It allows for a more nuanced representation of the data, capturing the uncertainty or
overlap between clusters. Soft clustering is particularly useful when dealing with complex data
distributions where instances may exhibit characteristics of multiple clusters simultaneously.
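Below is a minimal sketch of soft assignment with scikit-learn's GaussianMixture; the overlapping two-feature synthetic data is an illustrative assumption, chosen so that the cluster boundary is genuinely fuzzy.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Two overlapping 2-D blobs, so some points plausibly belong to both clusters
points = np.vstack([rng.normal([0, 0], 1.0, (200, 2)),
                    rng.normal([2, 2], 1.0, (200, 2))])

gmm = GaussianMixture(n_components=2, random_state=0).fit(points)

# Soft assignment: one probability per cluster for each point,
# rather than a single hard label
probs = gmm.predict_proba(points[:5])
print(np.round(probs, 3))   # each row sums to 1
```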


5. Explain Temporal Difference Learning


Temporal difference learning is an unsupervised technique very commonly used in reinforcement learning for the purpose of predicting the total reward expected over the future. It can, however, be used to predict other quantities as well. It is essentially a way to learn how to predict a quantity that depends on the future values of a given signal, and it is a method used to compute the long-term utility of a pattern of behaviour from a series of intermediate rewards.
Essentially, Temporal Difference Learning (TD Learning) focuses on predicting a variable's
future value in a sequence of states. Temporal difference learning was a major breakthrough in
solving the problem of reward prediction. You could say that it employs a mathematical trick
that allows it to replace complicated reasoning with a simple learning procedure that can be used
to generate the very same results.
The trick is that rather than attempting to calculate the total future reward, temporal difference
learning just attempts to predict the combination of immediate reward and its own reward
prediction at the next moment in time. Now when the next moment comes and brings fresh
information with it, the new prediction is compared with the expected prediction. If these two
predictions are different from each other, the Temporal Difference Learning algorithm will
calculate how different the predictions are from each other and make use of this temporal
difference to adjust the old prediction toward the new prediction.


The temporal difference algorithm always aims to bring the expected prediction and the new
prediction together, thus matching expectations with reality and gradually increasing the
accuracy of the entire chain of prediction.
Temporal Difference Learning aims to predict a combination of the immediate reward and
its own reward prediction at the next moment in time.
In TD learning, the training signal for a prediction is a future prediction. This method is a combination of the Monte Carlo (MC) method and the dynamic programming (DP) method. Monte Carlo methods adjust their estimates only after the final outcome is known, but temporal difference methods adjust predictions to match later, more accurate predictions for the future, well before the final outcome is clear and known. This is essentially a form of bootstrapping.
Temporal difference learning in machine learning got its name from the way it uses changes,
or differences, in predictions over successive time steps for the purpose of driving the
learning process.
The prediction at any particular time step gets updated to bring it nearer to the prediction of
the same quantity at the next time step.
Parameters of temporal difference learning:


Alpha (α): the learning rate.
It shows how much our estimates should be adjusted based on the error. This rate varies between 0 and 1.
Gamma (γ): the discount rate.
This indicates how much future rewards are valued. A larger discount rate signifies that future rewards are valued to a greater extent. The discount rate also varies between 0 and 1.
e: the ratio reflecting exploration vs. exploitation.
The agent explores new options with probability e and stays with the current maximum with probability 1 - e. A larger e signifies that more exploration is carried out during training.
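Putting these parameters together, the core TD(0) value update is V(s) ← V(s) + α[r + γV(s') − V(s)]. The sketch below implements that single rule; the toy trajectory data is an illustrative assumption.

```python
from collections import defaultdict

def td0_update(values, state, reward, next_state, alpha=0.1, gamma=0.9):
    """One TD(0) step: move V(state) toward reward + gamma * V(next_state)."""
    td_target = reward + gamma * values[next_state]   # the new prediction
    td_error = td_target - values[state]              # the temporal difference
    values[state] += alpha * td_error                 # adjust the old prediction
    return values

# Illustrative loop of (state, reward, next_state) transitions
trajectory = [("A", 0.0, "B"), ("B", 1.0, "C"), ("C", 0.0, "A")]
V = defaultdict(float)
for s, r, s_next in trajectory * 100:   # replay the transitions many times
    td0_update(V, s, r, s_next)
print(dict(V))
```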
6. Explain about dynamic programming: solution methods and applications
Dynamic programming is a technique that breaks a problem into sub-problems and saves their results for future use so that we do not need to compute them again. The property that the subproblems are optimized in order to optimize the overall solution is known as the optimal substructure property. The main use of dynamic programming is to solve optimization problems, meaning problems where we are trying to find the minimum or the maximum solution. Dynamic programming guarantees finding the optimal solution of a problem if such a solution exists.
The definition of dynamic programming says that it is a technique for solving a complex problem by first breaking it into a collection of simpler subproblems, solving each subproblem just once, and then storing their solutions to avoid repetitive computations.
Dynamic programming is a powerful optimization technique used to solve problems that can be
broken down into overlapping subproblems. It involves solving each subproblem only once and
storing the solutions to avoid redundant computations. This technique is particularly useful for
optimization problems where the goal is to find the best solution among a set of feasible
solutions.
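As a small concrete sketch of the idea, here is the classic Fibonacci example solved two ways: top-down with memoization and bottom-up. Both are standard formulations.

```python
from functools import lru_cache

# Top-down: recursion plus memoization. Each fib(n) is computed once
# and then reused, avoiding the exponential blow-up of naive recursion.
@lru_cache(maxsize=None)
def fib_memo(n: int) -> int:
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Bottom-up: build from the smallest subproblems upward,
# keeping only the last two values.
def fib_bottom_up(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

assert fib_memo(30) == fib_bottom_up(30) == 832040
```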
Here are key elements and applications of dynamic programming:
Key Elements of Dynamic Programming:
1. Optimal Substructure:
 Dynamic programming problems exhibit optimal substructure, meaning that an
optimal solution to the overall problem can be constructed from optimal solutions
of its subproblems.


2. Overlapping Subproblems:
 The problem can be broken down into smaller, overlapping subproblems. Solving
these subproblems independently and combining their solutions leads to an
optimal solution for the overall problem.
3. Memoization:
 To avoid redundant computations, dynamic programming often involves
memoization, which is a technique of storing and reusing solutions to
subproblems in a table or cache.
4. Bottom-Up or Top-Down Approach:
 Dynamic programming solutions can be implemented using either a bottom-up
approach, starting from the smallest subproblems and building up to the overall
problem, or a top-down approach, solving the problem by recursively breaking it
into smaller subproblems.
Applications of Dynamic Programming:
1. Fibonacci Sequence:
 One of the classic examples of dynamic programming is computing Fibonacci
numbers efficiently using memoization to avoid redundant calculations.
2. Shortest Path Problems:
 Algorithms like Dijkstra's and Floyd-Warshall for finding the shortest paths in
graphs use dynamic programming principles. These algorithms avoid
recomputing paths by storing solutions to subproblems.
3. Longest Common Subsequence (LCS):
 Dynamic programming is employed to find the longest common subsequence
between two sequences. This has applications in bioinformatics, text comparison,
and version control systems.
4. Knapsack Problem:
 The 0/1 Knapsack problem, where items have weights and values, and the goal is
to maximize the total value without exceeding a given weight limit, can be
efficiently solved using dynamic programming.
5. Matrix Chain Multiplication:


 Dynamic programming is applied to find the most efficient way to multiply a sequence of matrices, minimizing the total number of scalar multiplications required.
6. Coin Change Problem:
 Dynamic programming can be used to find the minimum number of coins needed to make change for a given amount, considering a set of coin denominations (a small code sketch follows this list).
7. Resource Allocation Problems:
 Dynamic programming is employed in various resource allocation problems, such
as project scheduling, to optimize the allocation of resources over time.
8. Optimal Binary Search Trees:
 In the construction of optimal binary search trees, where the goal is to minimize
the expected search time, dynamic programming is applied to find the optimal
structure.
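Here is a minimal sketch of the coin-change application mentioned in item 6, using the standard bottom-up table; the denominations and amount are illustrative.

```python
def min_coins(amount: int, denominations: list[int]) -> int:
    """Minimum number of coins to make `amount`, or -1 if impossible."""
    INF = float("inf")
    best = [0] + [INF] * amount          # best[a] = fewest coins for amount a
    for a in range(1, amount + 1):
        for coin in denominations:
            if coin <= a and best[a - coin] + 1 < best[a]:
                best[a] = best[a - coin] + 1
    return best[amount] if best[amount] != INF else -1

print(min_coins(11, [1, 2, 5]))   # 3, i.e. 5 + 5 + 1
```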
How does the dynamic programming approach work?
The following are the steps that dynamic programming follows:
o It breaks down the complex problem into simpler subproblems.
o It finds the optimal solution to these sub-problems.
o It stores the results of the subproblems. The process of storing the results of subproblems is known as memoization.
o It reuses them so that the same sub-problem is not calculated more than once.
o Finally, it calculates the result of the complex problem.
The above five steps are the basic steps of dynamic programming. Dynamic programming is applicable to problems that have overlapping subproblems and optimal substructure. Here, optimal substructure means that the solution to an optimization problem can be obtained by simply combining the optimal solutions of all the subproblems.
In the case of dynamic programming, the space complexity increases because we store the intermediate results, but the time complexity decreases.


7. Describe instance-based learning with advantages and disadvantages


The machine learning systems categorized as instance-based learning are systems that learn the training examples by heart and then generalize to new instances based on some similarity measure. It is called instance-based because it builds its hypotheses from the training instances. It is also known as memory-based learning or lazy learning (because processing is delayed until a new instance must be classified). The time complexity of this algorithm depends upon the size of the training data. Each time a new query is encountered, the previously stored data is examined in order to assign a target function value to the new instance.
The worst-case time complexity of this algorithm is O(n), where n is the number of training instances. For example, if we were to create a spam filter with an instance-based learning algorithm, instead of just flagging emails that are already marked as spam, our spam filter would also be programmed to flag emails that are very similar to them. This requires a measure of resemblance between two emails: a similarity measure could be the same sender, the repetitive use of the same keywords, or something else.
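Below is a minimal sketch of this similarity idea as a 1-nearest-neighbour classifier; the toy feature vectors (standing in for email features such as keyword counts) and their labels are illustrative assumptions.

```python
import numpy as np

def nearest_neighbor_predict(X_train, y_train, x_new):
    """Classify x_new by the label of its closest stored instance."""
    distances = np.linalg.norm(X_train - x_new, axis=1)   # the similarity measure
    return y_train[int(np.argmin(distances))]

# Toy "emails" as 2-feature vectors; label 1 = spam, 0 = not spam
X_train = np.array([[8.0, 1.0], [7.5, 0.5], [0.5, 6.0], [1.0, 7.0]])
y_train = np.array([1, 1, 0, 0])

print(nearest_neighbor_predict(X_train, y_train, np.array([7.0, 1.2])))  # -> 1
```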

Instance-based learning, also known as instance-based methods or memory-based learning, is a type of machine learning approach where the system learns directly from specific examples in the training data. Instead of constructing a general model that represents the entire training set, instance-based learning makes decisions based on the similarity between new instances and instances in the training set. This approach is often associated with lazy learning, as the model delays the processing of instances until a prediction is needed.
Advantages:
1. Instead of estimating for the entire instance set, local approximations can be made to the target function.
2. This algorithm can adapt easily to new data, which is collected as we go.
Flexibility and Adaptability: Instance-based learning is highly adaptable to changes in the training data. It can quickly incorporate new instances without requiring a full retraining of the model. This makes it suitable for dynamic and evolving datasets.


Complex Decision Boundaries: It can capture complex decision boundaries, especially in cases where the relationships between features are intricate. Instance-based methods can handle non-linearities well and are effective in scenarios where traditional models might struggle.
No Model Training Phase: Since instance-based learning doesn't involve a separate
training phase to build a global model, the training process is relatively simple and quick.
The model "learns" by memorizing the instances in the training set.
Transparent Decision Making: The decision-making process in instance-based learning
is often transparent and interpretable. Predictions are made based on the similarity
between the new instance and instances in the training set, making it easier to understand
the model's reasoning.

Disadvantages:
1. Classification costs are high.
2. A large amount of memory is required to store the data, and each query involves starting the identification of a local model from scratch.
Overfitting: Since instance-based learning essentially memorizes the training data, it can
be prone to overfitting. If the training set contains noise or outliers, the model may
perform poorly on new, unseen data.
Lack of Generalization: Instance-based learning may struggle to generalize well to
instances outside the training set. It tends to be more focused on individual instances,
which can limit its ability to capture broader patterns in the data.

Some of the instance-based learning algorithms are:
1. k-Nearest Neighbors (KNN)
2. Self-Organizing Map (SOM)
3. Learning Vector Quantization (LVQ)
4. Locally Weighted Learning (LWL)
5. Case-Based Reasoning


8(a).

8(b).
Given 100 hypothesis functions, each trained with 10^6 samples, what is the lower bound on the probability that there does not exist a hypothesis function with error greater than 0.1?
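A sketch of the standard solution, assuming the usual PAC bound for a finite hypothesis class and reading the sample size as m = 10^6: the probability that some hypothesis consistent with the data has true error greater than ε is at most |H| e^{-εm}, so

```latex
P(\text{no hypothesis has error} > \varepsilon)
  \;\ge\; 1 - |H|\, e^{-\varepsilon m}
  \;=\; 1 - 100\, e^{-0.1 \times 10^{6}}
  \;\approx\; 1
```

With these numbers the exponential term is vanishingly small, so the lower bound is effectively 1.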

