CIS 241, Dr. Ladd
🧠🧠🧠
AKA Deep Learning, Neural Nets, Artificial Neurons
A Visual and Interactive Guide to the Basics of Neural Networks
(Instead of Ordinary Least Squares or other methods.)
Chantal Brousseau in Programming Historian
These activation functions allow you to create nonlinear relationships and get more sophisticated predictions.
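To make this concrete, here is a minimal sketch (using plain NumPy, not part of the original slides) of two common activation functions. Both bend a straight-line input into a nonlinear output, which is what lets a network learn relationships a linear model cannot.

```python
import numpy as np

def relu(x):
    # ReLU: pass positive values through, clip negatives to 0
    return np.maximum(0, x)

def sigmoid(x):
    # Sigmoid: squash any input into the range (0, 1)
    return 1 / (1 + np.exp(-x))

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(relu(x))     # negative inputs become 0
print(sigmoid(x))  # all outputs fall between 0 and 1
```

Applying either function to the output of each node is what makes the whole network nonlinear.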
That is, we are trying to minimize loss (error) as much as possible. We can iteratively add nodes and layers to our network to do better at the task.
MLPClassifier: Multi-Layer Perceptron
Other key libraries: TensorFlow and Keras
hidden_layer_sizes: number and size of hidden layers
activation: type of activation function to use
solver: solving method. ‘sgd’ and ‘adam’ are both stochastic gradient descent methods and useful for larger datasets. ‘lbfgs’ is better for small data.
alpha: strength of regularization (helps with outliers)
learning_rate and learning_rate_init: for gradient descent only, determines the size of the steps
max_iter: the number of iterations or epochs until the model converges
random_state: sets the random seed, so results are reproducible
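The parameters above can be sketched in a small scikit-learn example. This is a minimal illustration, not the course assignment: it assumes the built-in iris dataset and a particular choice of layer sizes, and it scales the features first because MLPs are sensitive to feature magnitudes.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)

model = make_pipeline(
    StandardScaler(),  # scale features before the network
    MLPClassifier(
        hidden_layer_sizes=(10, 10),  # two hidden layers of 10 nodes each
        activation="relu",
        solver="lbfgs",   # good choice for a small dataset like this
        alpha=0.001,      # regularization strength
        max_iter=1000,    # cap on training iterations
        random_state=42,  # reproducible results
    ),
)

# 5-fold cross-validation accuracy
scores = cross_val_score(model, X, y, cv=5)
print(scores.mean())
```

Changing `hidden_layer_sizes`, `alpha`, or the solver and re-running the cross-validation is a quick way to see how each parameter affects accuracy.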
Can you get a cross-validation accuracy above 80%?