  1. Optimizers - Keras

    Base Optimizer API. These methods and attributes are common to all Keras optimizers. Optimizer class: keras.optimizers.Optimizer()
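
    Since all optimizers share this base API, here is a minimal sketch (assuming Keras 3) of reaching two of the common members through a concrete optimizer:

    ```python
    import keras

    # Any concrete optimizer inherits the base Optimizer API.
    opt = keras.optimizers.SGD(learning_rate=0.01)

    print(opt.learning_rate)  # common attribute: the current learning rate
    print(opt.get_config())   # common method: a serializable configuration dict
    ```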

  2. SGD - Keras

    learning_rate: A float, a keras.optimizers.schedules.LearningRateSchedule instance, or a callable that takes no arguments and returns the actual value to use.
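
    A short sketch of the three accepted forms, using SGD; the same learning_rate contract appears in the Muon, Adam, Ftrl, and Lamb entries below:

    ```python
    import keras

    # 1) A plain float
    opt_a = keras.optimizers.SGD(learning_rate=0.01)

    # 2) A LearningRateSchedule instance, evaluated at each training step
    opt_b = keras.optimizers.SGD(
        learning_rate=keras.optimizers.schedules.ExponentialDecay(
            initial_learning_rate=0.01, decay_steps=1000, decay_rate=0.9
        )
    )

    # 3) A zero-argument callable returning the value to use
    opt_c = keras.optimizers.SGD(learning_rate=lambda: 0.01)
    ```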

  3. Optimizers - Keras

    Optimizers: SGD, RMSprop, Adam, AdamW, Adadelta, Adagrad, Adamax, Adafactor, Nadam, Ftrl. apply_gradients method: Optimizer.apply_gradients(grads_and_vars, name=None, …)
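
    A minimal custom-training-loop sketch of apply_gradients, assuming the TensorFlow backend; the model, data, and loss here are hypothetical stand-ins:

    ```python
    import tensorflow as tf
    import keras

    model = keras.Sequential([keras.layers.Dense(1)])
    opt = keras.optimizers.SGD(learning_rate=0.1)
    loss_fn = keras.losses.MeanSquaredError()

    x = tf.random.normal((8, 4))
    y = tf.zeros((8, 1))

    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x))
    grads = tape.gradient(loss, model.trainable_variables)

    # grads_and_vars is an iterable of (gradient, variable) pairs
    opt.apply_gradients(zip(grads, model.trainable_variables))
    ```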

  4. Muon - Keras

    learning_rate: A float, a keras.optimizers.schedules.LearningRateSchedule instance, or a callable that takes no arguments and returns the actual value to use. The learning rate.

  5. Adam - Keras

    learning_rate: A float, a keras.optimizers.schedules.LearningRateSchedule instance, or a callable that takes no arguments and returns the actual value to use.

  6. Ftrl - Keras

    learning_rate: A float, a keras.optimizers.schedules.LearningRateSchedule instance, or a callable that takes no arguments and returns the actual value to use.

  7. Lamb - Keras

    learning_rate: A float, a keras.optimizers.schedules.LearningRateSchedule instance, or a callable that takes no arguments and returns the actual value to use.

  8. LearningRateSchedule - Keras

    The learning rate schedule base class. You can use a learning rate schedule to modulate how the learning rate of your optimizer changes over time. Several built-in learning rate schedules are …
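
    A sketch of subclassing the base class: the schedule is called with the current optimizer step and returns the rate to use. The warmup schedule and its parameters here are hypothetical:

    ```python
    import keras

    class LinearWarmup(keras.optimizers.schedules.LearningRateSchedule):
        """Hypothetical schedule: linear warmup to a target rate."""

        def __init__(self, target_lr, warmup_steps):
            self.target_lr = target_lr
            self.warmup_steps = warmup_steps

        def __call__(self, step):
            # `step` is the current optimizer iteration
            frac = keras.ops.minimum(
                keras.ops.cast(step, "float32") / self.warmup_steps, 1.0
            )
            return self.target_lr * frac

        def get_config(self):
            return {"target_lr": self.target_lr, "warmup_steps": self.warmup_steps}

    opt = keras.optimizers.Adam(learning_rate=LinearWarmup(1e-3, 1000))
    ```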

  9. Adam - Keras

    learning_rate: A tf.Tensor, floating point value, a schedule that is a tf.keras.optimizers.schedules.LearningRateSchedule, or a callable that takes no arguments …
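
    This entry is the TensorFlow-bundled tf.keras page, which additionally lists tf.Tensor as an accepted learning_rate type; a one-line sketch:

    ```python
    import tensorflow as tf

    # tf.keras also accepts a tf.Tensor as a fixed learning rate
    opt = tf.keras.optimizers.Adam(learning_rate=tf.constant(1e-3))
    ```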

  10. ExponentialDecay - Keras

    If the argument staircase is True, then step / decay_steps is an integer division and the decayed learning rate follows a staircase function. You can pass this schedule directly into a …
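
    A sketch with staircase=True, so the rate drops in discrete jumps rather than continuously (the decayed rate is initial_learning_rate * decay_rate ** (step // decay_steps)), and the schedule is passed directly as an optimizer's learning_rate:

    ```python
    import keras

    lr_schedule = keras.optimizers.schedules.ExponentialDecay(
        initial_learning_rate=0.1,
        decay_steps=1000,
        decay_rate=0.5,
        staircase=True,  # halve the rate once every 1000 steps
    )

    # The schedule can be passed directly into an optimizer
    opt = keras.optimizers.SGD(learning_rate=lr_schedule)
    ```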