Lecture 1-Unit 3.3


Unit 2

1. **Generalized Delta Rule and Updating of Hidden Layer and Output Layer:**
- The generalized delta rule is an extension of the delta rule used in backpropagation. It involves
adjusting weights based on the gradient of the error with respect to the weights. In a neural network,
weights in both the hidden and output layers are updated using this rule during the training process.
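A compact statement of the rule in the usual textbook notation (the symbols below, such as the learning rate α and activation function f, are standard conventions rather than taken from these notes):

```latex
% Output layer: error term and weight update (alpha = learning rate, f = activation)
\delta_k = (t_k - y_k)\, f'(\mathrm{net}_k), \qquad \Delta w_{jk} = \alpha\, \delta_k\, z_j
% Hidden layer: the error term is back-propagated through the output weights
\delta_j = f'(\mathrm{net}_j) \sum_k \delta_k\, w_{jk}, \qquad \Delta v_{ij} = \alpha\, \delta_j\, x_i
```

Here t_k is the target, y_k the output, z_j the hidden activation, and x_i the input.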

2. **Various Applications of Neural Networks:**


- Applications include image and speech recognition, natural language processing, financial
forecasting, medical diagnosis, autonomous vehicles, and game playing.

3. **Activation Functions Used in Single and Multilayer Networks:**


- Single Layer: Step function, sigmoid function, hyperbolic tangent (tanh).
- Multilayer: ReLU (Rectified Linear Unit), Leaky ReLU, Sigmoid, and tanh.

4. **Loss Function in Neural Networks:**


- The loss function measures the difference between predicted and actual outputs, quantifying the
model's performance. It guides the optimization algorithm during training.
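As an illustration, a minimal numpy sketch of two common loss functions (function names are illustrative, not from any particular library):

```python
import numpy as np

def mse_loss(y_true, y_pred):
    """Mean squared error: average squared difference between targets and predictions."""
    return np.mean((y_true - y_pred) ** 2)

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """Binary cross-entropy for predicted probabilities in (0, 1); eps avoids log(0)."""
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

# Example: a lower value means the predictions are closer to the targets.
print(mse_loss(np.array([1.0, 0.0]), np.array([0.9, 0.2])))   # 0.025
```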

5. **Concept of Cost Function in Machine Learning:**


- The cost function is a broader term that includes the loss function and any additional regularization
terms. It represents the overall error a model makes on the entire training dataset.
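Symbolically, with the usual textbook notation (N training examples, loss L, regularizer R with strength λ):

```latex
% Cost = average loss over the training set + regularization penalty
J(\theta) = \frac{1}{N} \sum_{i=1}^{N} L\big(y_i, \hat{y}_i\big) \;+\; \lambda\, R(\theta)
```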

6. **Distinguish between Hopfield and Iterative Auto-Associative Networks:**


- A Hopfield network is a specific recurrent network with symmetric weights, used as content-addressable memory. Iterative auto-associative networks are the broader class: they repeatedly feed the output back as input until a stored pattern is recovered, and may use different activation functions and architectures for associative memory tasks.

7. **Linearly Separable Problems:**


- Problems where a single straight line or hyperplane can separate classes in the input space are
considered linearly separable.

8. **Compare LSTM and Gated Recurrent Units (GRU):**


- Both LSTM and GRU are gated recurrent neural networks. LSTM maintains a separate memory cell with input, output, and forget gates, providing better long-term memory. GRU has a simpler structure with only update and reset gates and no separate cell state, making it computationally less expensive.

9. **Long Short-Term Memory (LSTM) and the Vanishing Gradient Problem:**


- LSTM is a type of recurrent neural network designed to address the vanishing gradient problem by
allowing the network to learn long-term dependencies. It achieves this through its memory cell and
various gates that regulate information flow.
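A minimal numpy sketch of a single LSTM time step may help; the parameter names and stacked-weight layout are assumptions for illustration. The key point is the additive cell-state update, which lets gradients flow across many steps:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step (sketch). W: (4H, D), U: (4H, H), b: (4H,) hold the
    stacked input/forget/cell/output parameters for hidden size H."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b              # all four gate pre-activations at once
    i = sigmoid(z[0:H])                     # input gate: how much new content to write
    f = sigmoid(z[H:2*H])                   # forget gate: how much old cell state to keep
    g = np.tanh(z[2*H:3*H])                 # candidate cell content
    o = sigmoid(z[3*H:4*H])                 # output gate: how much to expose as hidden state
    c = f * c_prev + i * g                  # additive update -> mitigates vanishing gradients
    h = o * np.tanh(c)
    return h, c
```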

10. **Linearization of Sigmoid Function:**


- Linearization of the sigmoid function involves approximating it as a straight line near its midpoint.
This is often done using the first-order Taylor expansion, resulting in a linear function.
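Concretely, expanding the logistic sigmoid about x = 0 (where σ(0) = 1/2 and σ'(0) = 1/4) gives:

```latex
% First-order Taylor expansion of the sigmoid about its midpoint x = 0
\sigma(x) = \frac{1}{1 + e^{-x}} \;\approx\; \sigma(0) + \sigma'(0)\,x \;=\; \tfrac{1}{2} + \tfrac{1}{4}x
```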

11. **Stateful vs. Stateless LSTMs:**


- Stateful LSTMs retain the hidden state between batches during training, while stateless LSTMs
reset the hidden state for each batch. Stateful LSTMs can capture longer dependencies but may
require careful management of state.

12. **Recurrent Neural Networks (RNNs):**


- RNNs are a type of neural network architecture with connections between nodes forming a
directed cycle. They are designed to handle sequential data and are used in tasks like time series
prediction and natural language processing.
13. **Associative Memory:**
- Associative memory refers to the ability of a network to recall patterns based on partial or noisy
input, allowing it to retrieve associated information.

14. **Significance of Convolutional Layer and Pooling Layer:**


- Convolutional layers detect spatial patterns in input data, while pooling layers reduce
dimensionality and retain important information, improving the efficiency of convolutional neural
networks (CNNs).

15. **Bidirectional Associative Memory:**


- Bidirectional Associative Memory (BAM) is a type of neural network where connections between
neurons are bidirectional, allowing the network to associate patterns in both directions.

16. **Distinguish between Recurrent and Non-Recurrent Networks:**


- Recurrent networks have connections forming cycles, allowing them to capture sequential
dependencies. Non-recurrent networks lack these cyclic connections and are typically used for tasks
where sequence information is less critical.

17. **Hopfield Memory:**


- Hopfield memory is a type of recurrent neural network used for associative memory tasks. It
stores and recalls patterns by adjusting weights based on the Hebbian learning rule.

18. **Auto-Associative Net vs. Hopfield Net:**


- Auto-associative nets are a broader class, while Hopfield nets are a specific type used for
associative memory. Hopfield nets have symmetric weights and are designed to converge to stable
states.

19. **Self-Organization:**
- Self-organization refers to the ability of neural networks to learn and adapt without explicit
programming. It is a key feature in unsupervised learning tasks.

20. **Binary vs. Bipolar Sigmoid Function:**


- The binary (logistic) sigmoid maps inputs to the range (0, 1), while the bipolar sigmoid maps inputs to the range (-1, 1). The bipolar form is often preferred in neural networks because its outputs are symmetric around zero.
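The two forms, in the usual notation:

```latex
% Binary (logistic) sigmoid: outputs in (0, 1)
f_{\mathrm{bin}}(x) = \frac{1}{1 + e^{-x}}
% Bipolar sigmoid: outputs in (-1, 1); a rescaled logistic, equal to tanh(x/2)
f_{\mathrm{bip}}(x) = \frac{2}{1 + e^{-x}} - 1 = \tanh\!\left(\frac{x}{2}\right)
```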

21. **Convolution Layer:**


- The convolution layer in a neural network performs feature extraction by applying filters to input
data, capturing spatial hierarchies and patterns.
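A minimal numpy sketch of the core operation (a single-channel "valid" convolution; CNN layers apply the same idea with many filters and channels):

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2-D convolution (cross-correlation, as used in CNNs).
    image: (H, W), kernel: (kH, kW) -> output: (H-kH+1, W-kW+1)."""
    H, W = image.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Each output value is the filter's response at one spatial position.
            out[i, j] = np.sum(image[i:i+kH, j:j+kW] * kernel)
    return out

# Example filter: responds where pixel intensity changes from left to right (a vertical edge).
edge_kernel = np.array([[1.0, 0.0, -1.0]] * 3)
```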

22. **Applications of CNN:**


- Applications include image recognition, object detection, video analysis, medical image analysis,
and natural language processing.

23. **Denoising Autoencoders:**


- Denoising autoencoders are trained to reconstruct clean input from noisy data, helping in feature
learning and data denoising.
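A minimal Keras-style sketch, assuming 784-dimensional inputs and Gaussian noise (layer sizes, the noise level, and the placeholder data are illustrative choices only):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Encoder compresses the input; decoder reconstructs the *clean* version of it.
inputs = keras.Input(shape=(784,))
encoded = layers.Dense(64, activation="relu")(inputs)       # compressed representation
decoded = layers.Dense(784, activation="sigmoid")(encoded)  # reconstruction
autoencoder = keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")

# Placeholder data: noisy input -> clean target, which forces robust feature learning.
x_clean = np.random.rand(1000, 784).astype("float32")
x_noisy = np.clip(x_clean + 0.2 * np.random.randn(*x_clean.shape), 0.0, 1.0).astype("float32")
autoencoder.fit(x_noisy, x_clean, epochs=5, batch_size=64)
```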

24. **Sparse Autoencoders:**


- Sparse autoencoders introduce sparsity in the hidden layer, promoting the learning of robust
features and improving generalization.

25. **Suitability of Activation Functions:**


- Sigmoid suits binary classification outputs, tanh is useful when zero-centered activations are desired, and ReLU is the default choice for most hidden layers due to its simplicity and effectiveness.

26. **Operations of Pooling Layer in CNN:**


- The pooling layer reduces spatial dimensions, downsampling the input by selecting the maximum
or average value in each local region. For example, max pooling selects the maximum value in each
region.
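A minimal numpy sketch of non-overlapping 2×2 max pooling:

```python
import numpy as np

def max_pool2d(x, size=2):
    """Max pooling with stride equal to the pool size (non-overlapping windows)."""
    H, W = x.shape
    H2, W2 = H // size, W // size
    # Reshape into (H2, size, W2, size) blocks and take the max inside each block.
    return x[:H2*size, :W2*size].reshape(H2, size, W2, size).max(axis=(1, 3))

x = np.array([[1, 3, 2, 0],
              [4, 2, 1, 5],
              [0, 1, 3, 2],
              [2, 6, 0, 1]], dtype=float)
print(max_pool2d(x))   # [[4. 5.]
                       #  [6. 3.]]
```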

27. **Advantages of Autoencoders over PCA for Dimensionality Reduction:**


- Autoencoders can learn non-linear representations, capturing more complex patterns than linear
methods like PCA. They are more suitable for high-dimensional data.

28. **Working of Gated Recurrent Unit (GRU):**


- GRU is a recurrent neural network unit with a simpler structure than LSTM. It uses an update gate and a reset gate to control the flow of information, allowing it to capture dependencies in sequential data without a separate memory cell.
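The standard GRU equations (bias terms omitted; some references swap the roles of z_t and 1 − z_t in the final interpolation):

```latex
% Update gate, reset gate, candidate state, and new hidden state
z_t = \sigma(W_z x_t + U_z h_{t-1}), \qquad r_t = \sigma(W_r x_t + U_r h_{t-1})
\tilde{h}_t = \tanh\big(W_h x_t + U_h (r_t \odot h_{t-1})\big)
h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t
```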

29. **Graphical Representation of Sigmoid Activation Function:**


- Graphically, the sigmoid function is an S-shaped curve that maps input values to output values
between 0 and 1.
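The underlying formula:

```latex
% Logistic sigmoid: S-shaped, approaching 0 for large negative x and 1 for large positive x
\sigma(x) = \frac{1}{1 + e^{-x}}, \qquad \sigma(0) = \tfrac{1}{2}
```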

30. **Significance of Sigmoid Activation Function:**


- Sigmoid activation is often used in the output layer of binary classification models, mapping the
network's output to a probability between 0 and 1.

31. **Graphical Representation of Different Activation Functions:**


- Graphs include the sigmoid, tanh, ReLU, and softmax functions, illustrating their characteristics
and non-linearities.

32. **Auto-Associative vs. Heteroassociative Memory:**


- Auto-associative memory recalls patterns from partial or noisy inputs. Heteroassociative memory
associates different patterns, mapping one set to another.

33. **Rectified Linear Units (ReLU) and Generalized Form:**


- ReLU activation function introduces non-linearity in hidden layers by outputting the input for positive values and zero for negative values. Its generalized form includes variations like Leaky ReLU.
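In symbols (α is a small fixed slope, such as 0.01, in Leaky ReLU):

```latex
\mathrm{ReLU}(x) = \max(0, x), \qquad
\mathrm{LeakyReLU}(x) = \begin{cases} x, & x > 0 \\ \alpha x, & x \le 0 \end{cases}
```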

34. **Differences between ReLU and Tanh Activation Functions:**


- ReLU outputs zero for negative values and passes positive values through unchanged, while tanh squashes inputs to the range (-1, 1) and can saturate for large inputs. ReLU is computationally less expensive and helps mitigate vanishing gradients.

35. **Algorithm of Discrete Hopfield Network and Its Architecture:**


- Patterns are stored with the Hebbian rule, giving a symmetric weight matrix with a zero diagonal. During recall, neurons are updated asynchronously: each neuron takes the sign of the weighted sum of the other neurons' states, and updates continue until the network settles into a stable state.
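A minimal numpy sketch of Hebbian storage and asynchronous recall (function names and the example pattern are illustrative):

```python
import numpy as np

def hopfield_train(patterns):
    """Store bipolar (+1/-1) patterns with the Hebbian rule; symmetric weights, zero diagonal."""
    P = np.array(patterns, dtype=float)
    W = P.T @ P
    np.fill_diagonal(W, 0.0)
    return W

def hopfield_recall(W, state, steps=5):
    """Asynchronous updates: each neuron takes the sign of its net input until the state is stable."""
    s = np.array(state, dtype=float)
    for _ in range(steps):
        for i in np.random.permutation(len(s)):
            net = W[i] @ s
            if net != 0:
                s[i] = np.sign(net)
    return s

# Example: a copy of the stored pattern with one flipped bit is corrected during recall.
W = hopfield_train([[1, -1, 1, -1]])
print(hopfield_recall(W, [1, -1, 1, 1]))   # [ 1. -1.  1. -1.]
```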

36. **Role of Rectified Linear Units in Hidden Layers:**


- ReLU introduces non-linearity and sparsity in hidden layers, aiding the learning of complex
features and speeding up training.

37. **Characteristics of Continuous Hopfield Network:**


- Continuous Hopfield networks use continuous-valued activations (typically a sigmoid) and continuous-time update dynamics, in contrast to the binary states of the discrete network. They are employed in optimization and associative memory tasks.

38. **Encoder–Decoder Sequence-to-Sequence Architecture:**


- This architecture involves an encoder network to process input data and a decoder network to
generate the output sequence. It is used in tasks like machine translation and text summarization.
