The “layer_dense” Function in R
Package: keras
Purpose: Add a densely connected (fully connected) neural network layer.
General class: Layer
Required argument(s):
units: Positive integer, dimensionality of the output space.
Notable optional arguments:
activation: Activation function to use (e.g., ‘relu’, ‘sigmoid’).
use_bias: Boolean, whether the layer uses a bias vector.
kernel_initializer: Initializer for the kernel weights matrix.
bias_initializer: Initializer for the bias vector.
input_shape: Shape of the input (required only for the first layer).
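As a hedged sketch of how the optional arguments fit together (the initializer helpers initializer_glorot_uniform() and initializer_zeros() are standard keras functions; the specific values here are illustrative):

# Load the required library
library(keras)
# A standalone dense layer spelling out the optional arguments
layer <- layer_dense(
  units = 64,                                        # output dimensionality
  activation = 'relu',                               # nonlinearity applied to outputs
  use_bias = TRUE,                                   # include a bias vector (the default)
  kernel_initializer = initializer_glorot_uniform(), # weight-matrix initializer
  bias_initializer = initializer_zeros()             # bias-vector initializer
)

In practice the defaults (glorot_uniform kernel, zero bias) are usually left as-is; they are written out above only to show where each argument goes.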
Example:
# Load the required library
library(keras)
# Define a simple neural network model with one dense layer
model <- keras_model_sequential() %>%
  layer_dense(units = 32, activation = 'relu', input_shape = c(784)) %>%
  layer_dense(units = 10, activation = 'softmax')
# Compile the model
model %>% compile(
  loss = 'categorical_crossentropy',
  optimizer = optimizer_rmsprop(),
  metrics = c('accuracy')
)
# Summary of the model
summary(model)

In this example, the layer_dense function from the keras package adds two densely connected layers to a sequential model. The first layer has 32 units with ReLU activation and expects an input of length 784 (a flattened 28 × 28 image, as in the MNIST dataset). The second layer has 10 units with softmax activation, suitable for classification into 10 classes. The model is then compiled with categorical cross-entropy loss, the RMSprop optimizer, and an accuracy metric.