In this paper we consider a version of the functional Hodrick-Prescott filter for functional time series. We show that the associated optimal smoothing operator preserves the 'noise-to-signal' structure. Moreover, we propose a consistent estimator of this optimal smoothing operator.
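For readers less familiar with the filter, the classical (scalar) Hodrick-Prescott problem is the penalized least-squares fit below; the functional version studied in this line of work promotes the scalar penalty weight to a smoothing operator acting on a Hilbert space. The notation here is the standard textbook one, not quoted from the paper.

```latex
\min_{\tau_1,\dots,\tau_T}\;\sum_{t=1}^{T}\bigl(y_t-\tau_t\bigr)^2
\;+\;\lambda\sum_{t=2}^{T-1}\bigl((\tau_{t+1}-\tau_t)-(\tau_t-\tau_{t-1})\bigr)^2
```

Here the y_t are the observations, the τ_t form the extracted trend, and λ > 0 is the smoothing parameter that the functional theory replaces with an operator.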
The idea of federated learning is to train deep neural network models collaboratively and share them among multiple participants without exposing their private training data to each other. This is highly attractive in the medical domain, where patient records are private. However, a recently proposed method called Deep Leakage from Gradients enables attackers to reconstruct data from shared gradients. This study shows how easily images can be reconstructed under different data initialization schemes and distance measures. We show how the data and the model architecture influence the optimal choice of initialization scheme and distance measure configuration when working with single images. We demonstrate that the choice of initialization scheme and distance measure can significantly improve convergence speed and reconstruction quality. Furthermore, we find that the optimal attack configuration depends largely on the nature of the target image distribution and the complexity of the model architecture.
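As a concrete illustration of the attack being studied, here is a minimal PyTorch sketch of a Deep-Leakage-from-Gradients-style reconstruction loop. The function and variable names (dlg_attack, true_grads, and so on) are our own, and the loss function is assumed to accept soft labels; the two knobs the study varies, the initialization of the dummy data and the gradient distance measure, are marked in comments.

```python
# Hypothetical sketch of a DLG-style gradient inversion attack in PyTorch.
import torch

def dlg_attack(model, loss_fn, true_grads, x_shape, num_classes, steps=100):
    # Initialization scheme for the dummy data: one of the configuration
    # choices whose effect on convergence the study investigates.
    dummy_x = torch.randn(x_shape, requires_grad=True)
    dummy_y = torch.randn(1, num_classes, requires_grad=True)
    opt = torch.optim.LBFGS([dummy_x, dummy_y])

    def closure():
        opt.zero_grad()
        loss = loss_fn(model(dummy_x), torch.softmax(dummy_y, dim=-1))
        grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        # Distance measure between dummy and observed gradients: the second
        # configuration choice studied (squared L2 shown here).
        dist = sum(((g - t) ** 2).sum() for g, t in zip(grads, true_grads))
        dist.backward()
        return dist

    for _ in range(steps):
        opt.step(closure)
    return dummy_x.detach(), dummy_y.detach()
```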
NeurIPS 2021 Workshop on New Frontiers in Federated Learning: Privacy, Fairness, Robustness, Personalization and Data Ownership.
The input data of a neural network may be reconstructed from knowledge of the gradients of that network, as demonstrated by <cit.>. By imposing prior information and using a uniform initialization, we demonstrate faster and more accurate image reconstruction. Exploring the theoretical limits of reconstruction, we show that a single input may be reconstructed, regardless of network depth, using a fully-connected neural network with one hidden node. We then generalize this result to a gradient averaged over mini-batches of size B. In this case, the full mini-batch can be reconstructed if the number of hidden units exceeds B, with an orthogonality regularizer to improve the precision. For a convolutional neural network, the required number of filters in the first convolutional layer is determined by multiple factors (e.g., padding, kernel size, and stride). Therefore, we require the number of filters h ≥ (d/d′)²C, where d is the input width, d′ is the output width after convolution...
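The single-input claim has a well-known concrete form for the first fully-connected layer, stated here in our own notation as a plausible illustration: with pre-activation z = Wx + b, the shared gradients satisfy

```latex
\frac{\partial L}{\partial W}=\frac{\partial L}{\partial z}\,x^{\top},
\qquad
\frac{\partial L}{\partial b}=\frac{\partial L}{\partial z},
\qquad\text{hence}\qquad
x^{\top}=\Bigl(\frac{\partial L}{\partial b}\Bigr)_i^{-1}
\Bigl(\frac{\partial L}{\partial W}\Bigr)_{i,:}
```

for any unit i with a nonzero bias gradient, so a single hidden node already suffices to recover x exactly.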
A new efficient orthogonalization of the B-spline basis is proposed and contrasted with some previous orthogonalization methods. The resulting orthogonal basis of splines is best visualized as a net of functions rather than a sequence of them. For this reason, the basis is referred to as a splinet. The splinets feature clear advantages over other spline bases. They efficiently exploit the 'near-orthogonality' featured by the B-splines, and gains are achieved at two levels: locality, exhibited through the small size of the total support of a splinet, and computational efficiency, which follows from the small number of orthogonalization procedures that need to be performed on the B-splines to achieve orthogonality. These efficiencies are formally proven by showing the asymptotic rates with respect to the number of elements in a splinet. The natural symmetry of the B-splines in the case of equally spaced knots is preserved in the splinets, while quasi-symmetrical features are also s...
An important issue in finance is model calibration. The calibration problem is the inverse of the option pricing problem. Calibration is performed on a set of option prices generated from a given e...
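To make the inverse-problem framing concrete, the sketch below calibrates a single Black-Scholes volatility to a set of quoted call prices by nonlinear least squares; the model choice, names, and parameter values are illustrative assumptions, not the setup of the paper.

```python
# Hedged sketch of calibration as an inverse problem: recover a model
# parameter (here one Black-Scholes volatility) from observed prices.
import numpy as np
from scipy.optimize import least_squares
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call (the forward problem)."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def calibrate(S, r, strikes, maturities, market_prices, sigma0=0.2):
    # Residuals between model and market prices; minimizing them inverts
    # the pricing map, which is the calibration (inverse) problem.
    def residuals(sigma):
        return bs_call(S, strikes, maturities, r, sigma) - market_prices
    return least_squares(residuals, x0=[sigma0], bounds=(1e-6, 5.0)).x[0]

# Example: recover the volatility that generated synthetic quotes.
K = np.array([90.0, 100.0, 110.0]); T = np.array([0.5, 1.0, 1.5])
quotes = bs_call(100.0, K, T, 0.01, 0.25)
print(calibrate(100.0, 0.01, K, T, quotes))  # ~0.25
```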
The Hodrick-Prescott filter was introduced to reconstruct a trend from noisy data. The filter is used in many fields of application, from geophysics to medical image processing, and depends on a smoothing operator in Hilbert space. In this paper, a generalization of the Hodrick-Prescott filter to Hilbert space is carried out, and the main result is the choice of the best smoothing operator in two cases: compact and non-compact operators.
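For intuition about the smoothing operator, here is the standard finite-dimensional Hodrick-Prescott filter written as a linear solve; this is a textbook discretization offered for illustration, whereas the paper works with operators on a Hilbert space.

```python
# Classical discrete Hodrick-Prescott filter as a linear smoothing
# operator: the trend solves (I + lam * D'D) tau = y, where D is the
# second-difference matrix.
import numpy as np

def hp_filter(y, lam=1600.0):
    n = len(y)
    # Second-difference matrix D of shape (n-2, n).
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    # Smoothing operator applied to the data.
    tau = np.linalg.solve(np.eye(n) + lam * D.T @ D, y)
    return tau, y - tau  # trend and residual 'cycle'

# Example usage on a noisy trend.
t = np.linspace(0.0, 1.0, 200)
y = t**2 + 0.05 * np.random.randn(200)
trend, cycle = hp_filter(y)
```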
We study a version of the functional Hodrick-Prescott filter where the associated operator is not necessarily compact, but merely closed and densely defined with closed range. We show that the associated optimal smoothing operator preserves the structure obtained in the compact case, when the underlying distribution of the data is Gaussian.
The problem of orthogonalization of the B-spline basis is discussed for both equally and arbitrarily spaced knots. A new efficient orthogonalization is proposed and contrasted with some previous methods. This new orthogonal basis of splines is better visualized as a net of orthogonalized functions rather than a sequence of them. The net is spread over different support ranges and different locations, resembling in this respect wavelet bases. For this reason the constructed basis is referred to as a splinet, and it features some clear advantages over other spline bases. The splinets exploit the near-orthogonality featured by the B-splines themselves, and through this, gains are achieved at two levels: locality, exhibited through the small size of the total support of a splinet, and computational efficiency, which follows from the small number of orthogonalization procedures that need to be performed on the B-splines to reach orthogonality. The original non-orthogonalized B-splines h...
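The band structure behind this near-orthogonality can be seen numerically in a few lines. The sketch below computes the Gram matrix of a cubic B-spline basis and applies a generic Cholesky-based orthonormalization; note that this is a baseline construction for illustration, not the splinet algorithm itself, and the knot vector is an arbitrary assumption.

```python
# Near-orthogonality of B-splines and a generic L2 orthonormalization.
import numpy as np
from scipy.interpolate import BSpline

k, n = 3, 10                                   # cubic splines, 10 basis functions
knots = np.concatenate(([0.0] * k, np.linspace(0.0, 1.0, n - k + 1), [1.0] * k))
x = np.linspace(0.0, 1.0, 2000)
dx = x[1] - x[0]
# Design matrix: column i is the i-th B-spline evaluated on the grid.
B = np.column_stack([BSpline(knots, np.eye(n)[i], k)(x) for i in range(n)])

# Gram matrix of L2 inner products; its narrow band is the
# near-orthogonality of the B-splines that splinets exploit.
G = B.T @ B * dx

# Generic orthonormalization: with G = L L^T (Cholesky), the columns of
# B @ inv(L)^T are orthonormal in L2; splinets achieve orthogonality
# with more localized supports than such a global transform.
L = np.linalg.cholesky(G)
O = B @ np.linalg.inv(L).T
```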
In implementations of functional data methods, the effect of the initial choice of an orthonormal basis has not gained much attention in the past. Typically, several standard bases, such as Fourier, wavelet, or spline bases, are considered to transform observed functional data, and a choice is made without any formal criterion indicating which basis is preferable for the initial transformation of the data into functions. In an attempt to address this issue, we propose a strictly data-driven method of orthogonal basis selection. The method uses the recently introduced orthogonal spline bases called splinets, obtained by efficient orthogonalization of the B-splines. The algorithm learns from the data, in the machine-learning style, to efficiently place knots. The optimality criterion is based on the average (per functional data point) mean square error and is utilized both in the learning algorithms and in comparison studies. The latter indicates efficiency that is particularly evid...
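One plausible reading of this criterion, stated in our own notation rather than the paper's: with N observed functions x_n and P_K the orthogonal projection onto the spline space with knot set K, the knots are chosen to drive down the average per-observation mean square error

```latex
\mathrm{MSE}(K)\;=\;\frac{1}{N}\sum_{n=1}^{N}\int\bigl(x_n(t)-(P_K x_n)(t)\bigr)^{2}\,dt .
```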
We propose a functional version of the Hodrick-Prescott filter for functional data which take values in an infinite-dimensional separable Hilbert space. We further characterize the associated optimal smoothing parameter when the associated linear operator is compact and the underlying distribution of the data is Gaussian.
We propose a functional version of the Hodrick–Prescott filter for functional data which take values in an infinite-dimensional separable Hilbert space. We further characterize the associated optimal smoothing operator when the associated linear operator is compact and the underlying distribution of the data is Gaussian.
We study a version of the functional Hodrick–Prescott filter in the case when the associated operator is not necessarily compact but merely closed and densely defined with closed range. We show that the associated optimal smoothing operator preserves the structure obtained in the compact case when the underlying distribution of the data is Gaussian.