Construction of NN


Construction_of_NN

April 29, 2024

Dataset - Boston Housing Data


[1]: # shrinivas_2206
# Deep Learning Tutorial

# https://www.tensorflow.org/api_docs/python/tf/keras/datasets/boston_housing
# http://lib.stat.cmu.edu/datasets/boston

[2]: # importing tensorflow & keras


import tensorflow as tf
from tensorflow import keras

from sklearn.model_selection import train_test_split


print(f"TF version: {tf.version.VERSION}")

2024-04-29 17:42:30.490737: I external/local_tsl/tsl/cuda/cudart_stub.cc:32]
Could not find cuda drivers on your machine, GPU will not be used.
2024-04-29 17:42:30.501140: I external/local_tsl/tsl/cuda/cudart_stub.cc:32]
Could not find cuda drivers on your machine, GPU will not be used.
2024-04-29 17:42:32.672503: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
TF version: 2.16.1

[3]: # import datasets


from tensorflow.keras.datasets import boston_housing

[4]: # import NN layers & other components


from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Activation, Dense, BatchNormalization, Dropout

from tensorflow.keras import optimizers

[5]: import pathlib # for path processing


import matplotlib.pyplot as plt # for plotting charts and plots
import numpy as np # for math and arrays
import pandas as pd # for dataframes

import seaborn as sns # for statistical plots

[6]: # tf.random.set_seed(13) # to make sure the experiment is reproducible - setting an initial seed value


# tf.debugging.set_log_device_placement(False)
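The commented-out call above hints at reproducibility. A minimal sketch of seeding every RNG in play - the helper name `set_global_seed` is my own, and the TensorFlow call is guarded in case TF is unavailable:

```python
import random

import numpy as np

def set_global_seed(seed: int = 13) -> None:
    """Seed Python's and NumPy's RNGs (and TensorFlow's, when present)."""
    random.seed(seed)
    np.random.seed(seed)
    try:
        import tensorflow as tf
        tf.random.set_seed(seed)
    except ImportError:
        pass  # TF not installed; the Python/NumPy seeds still apply

set_global_seed(13)
draw_a = np.random.rand(3)
set_global_seed(13)
draw_b = np.random.rand(3)  # identical to draw_a, confirming reproducibility
```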

[7]: all_ds = pd.read_csv("/home/user/Desktop/BostonHousing.csv")

[8]: all_ds.head()

[8]: crim zn indus chas nox rm age dis rad tax ptratio \
0 0.00632 18.0 2.31 0 0.538 6.575 65.2 4.0900 1 296 15.3
1 0.02731 0.0 7.07 0 0.469 6.421 78.9 4.9671 2 242 17.8
2 0.02729 0.0 7.07 0 0.469 7.185 61.1 4.9671 2 242 17.8
3 0.03237 0.0 2.18 0 0.458 6.998 45.8 6.0622 3 222 18.7
4 0.06905 0.0 2.18 0 0.458 7.147 54.2 6.0622 3 222 18.7

b lstat medv
0 396.90 4.98 24.0
1 396.90 9.14 21.6
2 392.83 4.03 34.7
3 394.63 2.94 33.4
4 396.90 5.33 36.2

[9]: print(f"Number of rows/examples and columns in dataset: {all_ds.shape}")

Number of rows/examples and columns in dataset: (506, 14)

[10]: all_ds.info()

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 506 entries, 0 to 505
Data columns (total 14 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 crim 506 non-null float64
1 zn 506 non-null float64
2 indus 506 non-null float64
3 chas 506 non-null int64
4 nox 506 non-null float64
5 rm 506 non-null float64
6 age 506 non-null float64
7 dis 506 non-null float64
8 rad 506 non-null int64
9 tax 506 non-null int64
10 ptratio 506 non-null float64

11 b 506 non-null float64
12 lstat 506 non-null float64
13 medv 506 non-null float64
dtypes: float64(11), int64(3)
memory usage: 55.5 KB

[11]: # Data cleaning - count NA values in each column


print(f"Display NA values in each column (data cleaning): {all_ds.isna().sum(axis = 0)}")

Display NA values in each column (data cleaning): crim 0


zn 0
indus 0
chas 0
nox 0
rm 0
age 0
dis 0
rad 0
tax 0
ptratio 0
b 0
lstat 0
medv 0
dtype: int64

[12]: print(f"Display NA values in each row (data cleaning): {all_ds.isna().sum(axis = 1)}")

Display NA values in each row (data cleaning): 0 0


1 0
2 0
3 0
4 0
..
501 0
502 0
503 0
504 0
505 0
Length: 506, dtype: int64

[13]: # Displaying NULL values per column (isnull is an alias of isna)


print(f"Display NULL values in each column (data cleaning): {all_ds.isnull().sum(axis = 0)}")

Display NULL values in each column (data cleaning): crim 0


zn 0

indus 0
chas 0
nox 0
rm 0
age 0
dis 0
rad 0
tax 0
ptratio 0
b 0
lstat 0
medv 0
dtype: int64

[14]: print(f"Display NULL values in each row (data cleaning): {all_ds.isnull().sum(axis = 1)}")

Display NULL values in each row (data cleaning): 0 0


1 0
2 0
3 0
4 0
..
501 0
502 0
503 0
504 0
505 0
Length: 506, dtype: int64

[15]: # Removing rows with NULL values


all_ds = all_ds.dropna()
print(f"After removing all rows with Null values: {all_ds}")

After removing all rows with Null values: crim zn indus chas nox
rm age dis rad tax \
0 0.00632 18.0 2.31 0 0.538 6.575 65.2 4.0900 1 296
1 0.02731 0.0 7.07 0 0.469 6.421 78.9 4.9671 2 242
2 0.02729 0.0 7.07 0 0.469 7.185 61.1 4.9671 2 242
3 0.03237 0.0 2.18 0 0.458 6.998 45.8 6.0622 3 222
4 0.06905 0.0 2.18 0 0.458 7.147 54.2 6.0622 3 222
.. … … … … … … … … … …
501 0.06263 0.0 11.93 0 0.573 6.593 69.1 2.4786 1 273
502 0.04527 0.0 11.93 0 0.573 6.120 76.7 2.2875 1 273
503 0.06076 0.0 11.93 0 0.573 6.976 91.0 2.1675 1 273
504 0.10959 0.0 11.93 0 0.573 6.794 89.3 2.3889 1 273
505 0.04741 0.0 11.93 0 0.573 6.030 80.8 2.5050 1 273

ptratio b lstat medv
0 15.3 396.90 4.98 24.0
1 17.8 396.90 9.14 21.6
2 17.8 392.83 4.03 34.7
3 18.7 394.63 2.94 33.4
4 18.7 396.90 5.33 36.2
.. … … … …
501 21.0 391.99 9.67 22.4
502 21.0 396.90 9.08 20.6
503 21.0 396.90 5.64 23.9
504 21.0 393.45 6.48 22.0
505 21.0 396.90 7.88 11.9

[506 rows x 14 columns]
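The Boston CSV happens to contain no missing values, so the NA counts above are all zero and `dropna()` removes nothing. A toy frame with a real NaN (column names here are just illustrative) shows what these calls report when data actually is missing:

```python
import numpy as np
import pandas as pd

# toy frame with one missing "rm" value
df = pd.DataFrame({"rm": [6.5, np.nan, 7.1], "medv": [24.0, 21.6, 34.7]})

na_per_column = df.isna().sum(axis=0)  # NA count per column
na_per_row = df.isna().sum(axis=1)     # NA count per row
cleaned = df.dropna()                  # keep only fully-populated rows
```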

[16]: # slice the dataset - e.g. keeping the first 20 rows


n = 20
temp_ds = all_ds[:n]
print(f"Showing first 20 rows: {temp_ds}")

Showing first 20 rows: crim zn indus chas nox rm age


dis rad tax \
0 0.00632 18.0 2.31 0 0.538 6.575 65.2 4.0900 1 296
1 0.02731 0.0 7.07 0 0.469 6.421 78.9 4.9671 2 242
2 0.02729 0.0 7.07 0 0.469 7.185 61.1 4.9671 2 242
3 0.03237 0.0 2.18 0 0.458 6.998 45.8 6.0622 3 222
4 0.06905 0.0 2.18 0 0.458 7.147 54.2 6.0622 3 222
5 0.02985 0.0 2.18 0 0.458 6.430 58.7 6.0622 3 222
6 0.08829 12.5 7.87 0 0.524 6.012 66.6 5.5605 5 311
7 0.14455 12.5 7.87 0 0.524 6.172 96.1 5.9505 5 311
8 0.21124 12.5 7.87 0 0.524 5.631 100.0 6.0821 5 311
9 0.17004 12.5 7.87 0 0.524 6.004 85.9 6.5921 5 311
10 0.22489 12.5 7.87 0 0.524 6.377 94.3 6.3467 5 311
11 0.11747 12.5 7.87 0 0.524 6.009 82.9 6.2267 5 311
12 0.09378 12.5 7.87 0 0.524 5.889 39.0 5.4509 5 311
13 0.62976 0.0 8.14 0 0.538 5.949 61.8 4.7075 4 307
14 0.63796 0.0 8.14 0 0.538 6.096 84.5 4.4619 4 307
15 0.62739 0.0 8.14 0 0.538 5.834 56.5 4.4986 4 307
16 1.05393 0.0 8.14 0 0.538 5.935 29.3 4.4986 4 307
17 0.78420 0.0 8.14 0 0.538 5.990 81.7 4.2579 4 307
18 0.80271 0.0 8.14 0 0.538 5.456 36.6 3.7965 4 307
19 0.72580 0.0 8.14 0 0.538 5.727 69.5 3.7965 4 307

ptratio b lstat medv


0 15.3 396.90 4.98 24.0
1 17.8 396.90 9.14 21.6
2 17.8 392.83 4.03 34.7
3 18.7 394.63 2.94 33.4

4 18.7 396.90 5.33 36.2
5 18.7 394.12 5.21 28.7
6 15.2 395.60 12.43 22.9
7 15.2 396.90 19.15 27.1
8 15.2 386.63 29.93 16.5
9 15.2 386.71 17.10 18.9
10 15.2 392.52 20.45 15.0
11 15.2 396.90 13.27 18.9
12 15.2 390.50 15.71 21.7
13 21.0 396.90 8.26 20.4
14 21.0 380.02 10.26 18.2
15 21.0 395.62 8.47 19.9
16 21.0 386.85 6.58 23.1
17 21.0 386.75 14.67 17.5
18 21.0 288.99 11.69 20.2
19 21.0 390.95 11.28 18.2

[17]: print(f"Showing shape of first 20 rows: {temp_ds.shape}")

Showing shape of first 20 rows: (20, 14)


Sample the dataset randomly and return a fraction of it
[18]: all_ds.sample(frac = 1) # returns a randomly shuffled copy (note: the result is not stored here)
all_ds_90pct = all_ds.sample(frac = 0.9) # keep a random 90% of the rows

[19]: print(f"all_ds_90pct: {all_ds_90pct}")

all_ds_90pct: crim zn indus chas nox rm age dis rad


tax \
143 4.09740 0.0 19.58 0 0.871 5.468 100.0 1.4118 5 403
304 0.05515 33.0 2.18 0 0.472 7.236 41.1 4.0220 7 222
499 0.17783 0.0 9.69 0 0.585 5.569 73.5 2.3999 6 391
288 0.04590 52.5 5.32 0 0.405 6.315 45.6 7.3172 6 293
325 0.19186 0.0 7.38 0 0.493 6.431 14.7 5.4159 5 287
.. … … … … … … … … … …
110 0.10793 0.0 8.56 0 0.520 6.195 54.4 2.7778 5 384
300 0.04417 70.0 2.24 0 0.400 6.871 47.4 7.8278 5 358
179 0.05780 0.0 2.46 0 0.488 6.980 58.4 2.8290 3 193
379 17.86670 0.0 18.10 0 0.671 6.223 100.0 1.3861 24 666
458 7.75223 0.0 18.10 0 0.713 6.301 83.7 2.7831 24 666

ptratio b lstat medv


143 14.7 396.90 26.42 15.6
304 18.4 393.68 6.93 36.1
499 19.2 395.77 15.10 17.5
288 16.6 396.90 7.60 22.3
325 19.6 393.68 5.08 24.6

.. … … … …
110 20.9 393.49 13.00 21.7
300 14.8 390.86 6.07 24.8
179 17.8 396.90 5.04 37.2
379 20.2 393.74 21.78 10.2
458 20.2 272.21 16.23 14.9

[455 rows x 14 columns]

[43]: print(f"all_ds_90pct.shape: {all_ds_90pct.shape}")

all_ds_90pct.shape: (455, 14)
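A side note on `sample(frac=...)`: without a `random_state`, each call draws a fresh random subset, which is why re-running the sampling cell shows different rows. A small sketch on a toy frame (the seed value is an arbitrary choice):

```python
import pandas as pd

df = pd.DataFrame({"x": range(100)})

# frac=0.9 keeps 90% of the rows; a fixed random_state makes the draw repeatable
sample_a = df.sample(frac=0.9, random_state=13)
sample_b = df.sample(frac=0.9, random_state=13)  # identical to sample_a
```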

[21]: # Split data into 60% train and 40% test <- can be revised.
train_dataset, temp_test_dataset = train_test_split(all_ds_90pct, test_size = 0.4)

[22]: print(f"train_dataset: {train_dataset.shape}") # 60% of all_ds_90pct

train_dataset: (273, 14)

[23]: print(f"temp_test_dataset: {temp_test_dataset.shape}")

temp_test_dataset: (182, 14)

[24]: test_dataset, valid_dataset = train_test_split(temp_test_dataset, test_size = 0.5)

[25]: print(f"test_dataset shape: {test_dataset.shape}")

test_dataset shape: (91, 14)

[26]: print(f"valid_dataset shape: {valid_dataset.shape}")

valid_dataset shape: (91, 14)

[27]: print(f"Display data type of test_dataset: {type(test_dataset)}")

Display data type of test_dataset: <class 'pandas.core.frame.DataFrame'>

[28]: print(f"Train dataset: {train_dataset.shape}")

Train dataset: (273, 14)

[29]: print(f"Test dataset: {test_dataset.shape}")

Test dataset: (91, 14)

[30]: print(f"Validation dataset: {valid_dataset.shape}")

Validation dataset: (91, 14)
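The two `train_test_split` calls above amount to a 60/20/20 split of the 455 sampled rows. A plain-Python sketch of the same arithmetic - the helper name is illustrative, not sklearn's API:

```python
import random

def split_60_20_20(rows, seed=13):
    """Shuffle, take 60% as train, then split the remaining 40% in half."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)
    n_train = int(len(rows) * 0.6)       # 60% -> train
    n_test = (len(rows) - n_train) // 2  # half of the rest -> test
    return (rows[:n_train],
            rows[n_train:n_train + n_test],
            rows[n_train + n_test:])     # remainder -> validation

train, test, valid = split_60_20_20(range(455))
# sizes: 273 / 91 / 91, matching the shapes printed above
```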

[31]: train_stats = train_dataset.describe()


train_stats.pop("medv") # drop the target column from the summary statistics

sns.pairplot(train_dataset, diag_kind = "kde") # pairwise relationships between features


plt.show()

[32]: # Example of corrected syntax for the pairplot above


# sns.pairplot(train_dataset[['crim', 'rm', 'lstat', 'medv']], diag_kind = 'kde')
# diag_kind accepts 'auto', 'hist' or 'kde'

[33]: print(f"train_stats: {train_stats}")

train_stats: crim zn indus chas nox


rm \
count 273.000000 273.000000 273.000000 273.000000 273.000000 273.000000
mean 3.402634 11.496337 10.811905 0.087912 0.553508 6.283212
std 8.911577 22.797824 6.777439 0.283687 0.119342 0.715935
min 0.006320 0.000000 0.460000 0.000000 0.385000 3.561000
25% 0.087070 0.000000 5.130000 0.000000 0.448000 5.879000
50% 0.249800 0.000000 8.560000 0.000000 0.524000 6.193000
75% 3.321050 18.000000 18.100000 0.000000 0.624000 6.579000
max 88.976200 95.000000 25.650000 1.000000 0.871000 8.780000

age dis rad tax ptratio b \


count 273.000000 273.000000 273.000000 273.000000 273.000000 273.000000
mean 69.677289 3.873237 9.260073 397.787546 18.392674 363.179048
std 27.088831 2.202065 8.538093 164.263795 2.218625 82.884919
min 6.200000 1.129600 1.000000 187.000000 12.600000 2.520000
25% 48.500000 2.103600 4.000000 279.000000 16.800000 377.510000
50% 79.200000 3.152300 5.000000 330.000000 18.900000 391.430000
75% 93.600000 5.400700 8.000000 437.000000 20.200000 396.900000
max 100.000000 12.126500 24.000000 666.000000 22.000000 396.900000

lstat
count 273.000000
mean 12.450110
std 6.945146
min 1.730000
25% 6.750000
50% 11.450000
75% 16.290000
max 37.970000

[34]: print(f"train_stats: {train_stats.transpose()}") # for better readability

train_stats: count mean std min 25%


50% \
crim 273.0 3.402634 8.911577 0.00632 0.08707 0.2498
zn 273.0 11.496337 22.797824 0.00000 0.00000 0.0000
indus 273.0 10.811905 6.777439 0.46000 5.13000 8.5600
chas 273.0 0.087912 0.283687 0.00000 0.00000 0.0000
nox 273.0 0.553508 0.119342 0.38500 0.44800 0.5240
rm 273.0 6.283212 0.715935 3.56100 5.87900 6.1930
age 273.0 69.677289 27.088831 6.20000 48.50000 79.2000
dis 273.0 3.873237 2.202065 1.12960 2.10360 3.1523
rad 273.0 9.260073 8.538093 1.00000 4.00000 5.0000
tax 273.0 397.787546 164.263795 187.00000 279.00000 330.0000
ptratio 273.0 18.392674 2.218625 12.60000 16.80000 18.9000

b 273.0 363.179048 82.884919 2.52000 377.51000 391.4300
lstat 273.0 12.450110 6.945146 1.73000 6.75000 11.4500

75% max
crim 3.32105 88.9762
zn 18.00000 95.0000
indus 18.10000 25.6500
chas 0.00000 1.0000
nox 0.62400 0.8710
rm 6.57900 8.7800
age 93.60000 100.0000
dis 5.40070 12.1265
rad 8.00000 24.0000
tax 437.00000 666.0000
ptratio 20.20000 22.0000
b 396.90000 396.9000
lstat 16.29000 37.9700

[35]: # "medv" - the median value of the house (in $1000s) - is our target/label.


# We have three datasets: train, test, validate.
# We pop the label column out of each and save it into a new variable.

[36]: train_lables = train_dataset.pop("medv")


test_lables = test_dataset.pop("medv")
valid_lables = valid_dataset.pop("medv")
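`DataFrame.pop` both removes the column from the frame and returns it, which is why a single call per split separates the features from the labels. A toy sketch of that behavior:

```python
import pandas as pd

df = pd.DataFrame({"rm": [6.5, 7.1], "medv": [24.0, 34.7]})

labels = df.pop("medv")  # removes "medv" from df AND returns it as a Series
# df now holds only the feature columns; labels holds the target values
```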

[37]: # Defining a function to normalize data -> formula: (x - mean) / (std. deviation)

[38]: def norm(x):
    return (x - train_stats["mean"]) / train_stats["std"]

[39]: # Redefine train_stats with per-column mean and std so norm() can index them
train_stats = {
    "mean": train_dataset.mean(),
    "std": train_dataset.std()
}

[40]: # Normalize each split and save the results into new variables


normed_train_data = norm(train_dataset)
normed_test_data = norm(test_dataset)
normed_valid_data = norm(valid_dataset)
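The key point of the normalization above is that the mean and std come from the *training* split only, then get applied to every split, so no test information leaks into training. A dependency-free sketch of the same formula on toy values:

```python
from statistics import mean, stdev

train_col = [24.0, 21.6, 34.7, 33.4, 36.2]  # toy training column
test_col = [28.7, 22.9]                     # toy test column

mu, sigma = mean(train_col), stdev(train_col)

# (x - mean) / std, always using the *training* statistics
normed_train = [(x - mu) / sigma for x in train_col]
normed_test = [(x - mu) / sigma for x in test_col]
# the normalized training column has mean ~0 and std ~1 by construction
```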

[46]: # Train/Test/Validate Splits:

[45]: print(f"norm(train_dataset): {normed_train_data.shape}")

norm(train_dataset): (273, 13)

[47]: print(f"norm(test_dataset): {normed_test_data.shape}")

norm(test_dataset): (91, 13)

[48]: print(f"norm(valid_dataset): {normed_valid_data.shape}")

norm(valid_dataset): (91, 13)


Train/Test/Validate labels:

[50]: print(f"Train labels: {train_lables.shape}")

Train labels: (273,)

[51]: print(f"Test labels: {test_lables.shape}")

Test labels: (91,)

[52]: print(f"Validate labels: {valid_lables.shape}")

Validate labels: (91,)


Show a sample of the data after normalization
[53]: normed_train_data.head(10)

[53]: crim zn indus chas nox rm age \


33 -0.252583 -0.504273 -0.394235 -0.309891 -0.129949 -0.813220 0.934803
56 -0.379516 3.224153 -1.486093 -0.309891 -1.202496 0.139381 -1.254291
259 -0.308137 0.373003 -1.009512 -0.309891 0.783392 0.780500 1.119381
164 -0.130198 -0.504273 1.293718 -0.309891 0.431462 -0.599513 0.816673
307 -0.376287 0.943233 -1.273623 -0.309891 -0.682981 0.790278 0.022988
351 -0.372901 2.127557 -1.345922 -0.309891 -1.194117 0.413149 -1.246908
240 -0.369109 0.811642 -0.867865 -0.309891 -1.051669 0.857323 -0.567662
393 0.588238 -0.504273 1.075346 -0.309891 1.168838 -0.126006 0.846205
265 -0.296358 0.373003 -1.009512 -0.309891 0.783392 -1.010165 -0.253879
215 -0.359601 -0.504273 -0.032742 -0.309891 -0.540534 -0.141371 -1.006957

dis rad tax ptratio b lstat


33 -0.039071 -0.616071 -0.552694 1.175199 -0.053195 0.849498
56 2.413354 -0.850316 -0.516167 -0.492500 0.406841 -0.961839
259 -0.845814 -0.498949 -0.814468 -2.430637 0.346878 -0.799135
164 -0.659034 -0.498949 0.031732 -1.664397 0.385244 -0.116644
307 -0.313586 -0.264705 -1.070154 0.003302 0.406841 -0.708424
351 3.104842 -0.616071 0.080434 -0.041771 0.091705 -1.002155
240 1.118433 -0.381827 -0.595308 -0.808011 0.338674 -0.154080
393 -0.945493 1.726372 1.632815 0.814615 0.406841 0.391625
265 -0.856803 -0.498949 -0.814468 -2.430637 0.352548 -0.287987
215 0.032771 -0.616071 -0.735327 0.093448 0.367388 -0.429092

[55]: print(f"normed_train_data: {normed_train_data.head(10)}")

normed_train_data: crim zn indus chas nox


rm age \
33 -0.252583 -0.504273 -0.394235 -0.309891 -0.129949 -0.813220 0.934803
56 -0.379516 3.224153 -1.486093 -0.309891 -1.202496 0.139381 -1.254291
259 -0.308137 0.373003 -1.009512 -0.309891 0.783392 0.780500 1.119381
164 -0.130198 -0.504273 1.293718 -0.309891 0.431462 -0.599513 0.816673
307 -0.376287 0.943233 -1.273623 -0.309891 -0.682981 0.790278 0.022988
351 -0.372901 2.127557 -1.345922 -0.309891 -1.194117 0.413149 -1.246908
240 -0.369109 0.811642 -0.867865 -0.309891 -1.051669 0.857323 -0.567662
393 0.588238 -0.504273 1.075346 -0.309891 1.168838 -0.126006 0.846205
265 -0.296358 0.373003 -1.009512 -0.309891 0.783392 -1.010165 -0.253879
215 -0.359601 -0.504273 -0.032742 -0.309891 -0.540534 -0.141371 -1.006957

dis rad tax ptratio b lstat


33 -0.039071 -0.616071 -0.552694 1.175199 -0.053195 0.849498
56 2.413354 -0.850316 -0.516167 -0.492500 0.406841 -0.961839
259 -0.845814 -0.498949 -0.814468 -2.430637 0.346878 -0.799135
164 -0.659034 -0.498949 0.031732 -1.664397 0.385244 -0.116644
307 -0.313586 -0.264705 -1.070154 0.003302 0.406841 -0.708424
351 3.104842 -0.616071 0.080434 -0.041771 0.091705 -1.002155
240 1.118433 -0.381827 -0.595308 -0.808011 0.338674 -0.154080
393 -0.945493 1.726372 1.632815 0.814615 0.406841 0.391625
265 -0.856803 -0.498949 -0.814468 -2.430637 0.352548 -0.287987
215 0.032771 -0.616071 -0.735327 0.093448 0.367388 -0.429092

[56]: print(f"normed_train_data: {normed_train_data}")

normed_train_data: crim zn indus chas nox


rm age \
33 -0.252583 -0.504273 -0.394235 -0.309891 -0.129949 -0.813220 0.934803
56 -0.379516 3.224153 -1.486093 -0.309891 -1.202496 0.139381 -1.254291
259 -0.308137 0.373003 -1.009512 -0.309891 0.783392 0.780500 1.119381
164 -0.130198 -0.504273 1.293718 -0.309891 0.431462 -0.599513 0.816673
307 -0.376287 0.943233 -1.273623 -0.309891 -0.682981 0.790278 0.022988
.. … … … … … … …
465 -0.026823 -0.504273 1.075346 -0.309891 0.850426 -0.732207 -0.792847
291 -0.372973 3.004833 -0.864914 -0.309891 -1.194117 1.207913 -1.549616
216 -0.376705 -0.504273 0.454168 3.215120 -0.029398 -0.552023 -0.504905
431 0.747305 -0.504273 1.075346 -0.309891 0.255497 0.767929 0.908962
11 -0.368640 0.044025 -0.434073 -0.309891 -0.247259 -0.383013 0.488124

dis rad tax ptratio b lstat


33 -0.039071 -0.616071 -0.552694 1.175199 -0.053195 0.849498
56 2.413354 -0.850316 -0.516167 -0.492500 0.406841 -0.961839
259 -0.845814 -0.498949 -0.814468 -2.430637 0.346878 -0.799135
164 -0.659034 -0.498949 0.031732 -1.664397 0.385244 -0.116644

307 -0.313586 -0.264705 -1.070154 0.003302 0.406841 -0.708424
.. … … … … … …
465 -0.366355 1.726372 1.632815 0.814615 -0.347217 0.241880
291 0.564681 -0.616071 -0.930135 0.363886 0.406841 -1.280046
216 -0.345647 -0.498949 -0.741414 -0.898157 0.357374 0.152609
431 -0.810619 1.726372 1.632815 0.814615 -3.400487 1.042439
11 1.068753 -0.498949 -0.528343 -1.439032 0.406841 0.118052

[273 rows x 13 columns]


Build a Neural Network (my reference: Method_02)

[58]: import tensorflow as tf


from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Input

[59]: # Initialize Sequential model

[60]: model = Sequential()

[61]: # Adding layers to above initiated model

[62]: model.add(Dense(units = 1, input_shape = (normed_train_data.shape[1],), name = "l1_Input")) # 13 input features; passing the full 2-D shape here would make every layer output 3-D

model.add(Dense(units = 50, activation = 'relu', name = "l2_Hidden_1"))
model.add(Dense(units = 50, activation = 'relu', name = "l3_Hidden_2"))
model.add(Dense(units = 1, name = "l4_Output")) # linear output for regression; sigmoid would squash predictions into [0, 1]

/home/user/.local/lib/python3.10/site-
packages/keras/src/layers/core/dense.py:86: UserWarning: Do not pass an
`input_shape`/`input_dim` argument to a layer. When using Sequential models,
prefer using an `Input(shape)` object as the first layer in the model instead.
super().__init__(activity_regularizer=activity_regularizer, **kwargs)

[63]: model.summary() # summary() prints the table itself and returns None

Model: "sequential"

| Layer (type)        | Output Shape    | Param # |
|---------------------|-----------------|---------|
| l1_Input (Dense)    | (None, 273, 1)  |      14 |
| l2_Hidden_1 (Dense) | (None, 273, 50) |     100 |
| l3_Hidden_2 (Dense) | (None, 273, 50) |   2,550 |
| l4_Output (Dense)   | (None, 273, 1)  |      51 |

Total params: 2,715 (10.61 KB)

Trainable params: 2,715 (10.61 KB)

Non-trainable params: 0 (0.00 B)
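The Param # column in the summary follows directly from the Dense layer formula: each layer has `inputs x units` weights plus `units` biases. A quick check of the counts shown above:

```python
def dense_params(n_in: int, n_out: int) -> int:
    """Weights (n_in * n_out) plus biases (n_out) of a Dense layer."""
    return n_in * n_out + n_out

# layer widths in the model above: 13 features -> 1 -> 50 -> 50 -> 1
widths = [13, 1, 50, 50, 1]
counts = [dense_params(a, b) for a, b in zip(widths, widths[1:])]

print(counts, sum(counts))  # [14, 100, 2550, 51] 2715
```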


