  1. Optimizers - Keras

    Base Optimizer API. These methods and attributes are common to all Keras optimizers. [source] Optimizer class: keras.optimizers.Optimizer()
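
    A minimal sketch of that shared surface, assuming Keras 3; the Adam subclass here stands in for any built-in optimizer.

      import keras

      # Every built-in optimizer subclasses keras.optimizers.Optimizer,
      # so these methods and attributes behave the same across all of them.
      opt = keras.optimizers.Adam(learning_rate=1e-3)

      print(opt.learning_rate)   # current learning rate (a Keras variable)
      config = opt.get_config()  # serializable dict of hyperparameters
      restored = keras.optimizers.Adam.from_config(config)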

  2. SGD - Keras

    learning_rate: A float, a keras.optimizers.schedules.LearningRateSchedule instance, or a callable that takes no arguments and returns the actual value to use.
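
    A short sketch of all three accepted forms (the same learning_rate contract appears verbatim under Muon, Adam, Ftrl, and Lamb below):

      import keras

      # Form 1: a plain float.
      sgd_float = keras.optimizers.SGD(learning_rate=0.01)

      # Form 2: a LearningRateSchedule instance.
      schedule = keras.optimizers.schedules.ExponentialDecay(
          initial_learning_rate=0.01, decay_steps=1000, decay_rate=0.9
      )
      sgd_schedule = keras.optimizers.SGD(learning_rate=schedule)

      # Form 3: a callable taking no arguments that returns the value to use.
      sgd_callable = keras.optimizers.SGD(learning_rate=lambda: 0.01)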

  3. Optimizers - Keras

    Optimizers: SGD, RMSprop, Adam, AdamW, Adadelta, Adagrad, Adamax, Adafactor, Nadam, Ftrl. [source] apply_gradients method: Optimizer.apply_gradients(grads_and_vars, name=None, …
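
    A hedged sketch of apply_gradients in a custom training step, assuming the TensorFlow backend; the tiny model and random data are placeholders, not part of the documented API.

      import tensorflow as tf
      import keras

      model = keras.Sequential([keras.layers.Dense(1)])
      model.build(input_shape=(None, 4))
      optimizer = keras.optimizers.SGD(learning_rate=0.01)
      loss_fn = keras.losses.MeanSquaredError()

      x = tf.random.normal((8, 4))
      y = tf.random.normal((8, 1))

      with tf.GradientTape() as tape:
          loss = loss_fn(y, model(x, training=True))
      grads = tape.gradient(loss, model.trainable_variables)

      # apply_gradients expects an iterable of (gradient, variable) pairs.
      optimizer.apply_gradients(zip(grads, model.trainable_variables))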

  4. Muon - Keras

    learning_rate: A float, a keras.optimizers.schedules.LearningRateSchedule instance, or a callable that takes no arguments and returns the actual value to use. The learning rate.

  5. Adam - Keras

    learning_rate: A float, a keras.optimizers.schedules.LearningRateSchedule instance, or a callable that takes no arguments and returns the actual value to use.

  6. Ftrl - Keras

    learning_rate: A float, a keras.optimizers.schedules.LearningRateSchedule instance, or a callable that takes no arguments and returns the actual value to use.

  7. Lamb - Keras

    learning_rate: A float, a keras.optimizers.schedules.LearningRateSchedule instance, or a callable that takes no arguments and returns the actual value to use.

  8. Keras 2 API documentation

    Keras 3 API documentation / Keras 2 API documentation: Models API, Layers API, Callbacks API, Optimizers, Metrics, Losses, Data loading, Built-in small datasets, Keras Applications, Mixed …

  9. LearningRateSchedule - Keras

    Several built-in learning rate schedules are available, such as keras.optimizers.schedules.ExponentialDecay or …
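
    A short sketch of how a built-in schedule behaves: a LearningRateSchedule is a callable mapping the current step to a rate, and it can be passed anywhere learning_rate is accepted.

      import keras

      schedule = keras.optimizers.schedules.ExponentialDecay(
          initial_learning_rate=0.1, decay_steps=100, decay_rate=0.96
      )

      # With staircase=False (the default), the rate at a given step is
      # 0.1 * 0.96 ** (step / 100).
      print(float(schedule(0)))    # 0.1
      print(float(schedule(100)))  # ~0.096

      opt = keras.optimizers.SGD(learning_rate=schedule)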

  10. Adagrad - Keras

    Note that Adagrad tends to benefit from higher initial learning rate values compared to other optimizers. To match the exact form in the original paper, use 1.0.
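
    A minimal illustration, assuming the Keras 3 default of learning_rate=0.001 for Adagrad:

      import keras

      opt_default = keras.optimizers.Adagrad()  # default learning_rate=0.001
      # A higher initial rate, matching the exact form in the original paper.
      opt_paper = keras.optimizers.Adagrad(learning_rate=1.0)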