
cntk:dropout

cntk:dropout(
   $x as cntk:variable,
   $dropout-rate as xs:double,
   [$seed as xs:unsignedLong],
   [$name as xs:string]
) as cntk:function

Summary

Each element of the input is independently set to 0 with probability $dropout-rate, or scaled by 1 / (1 - $dropout-rate) with probability 1 - $dropout-rate. Dropout is an effective way to reduce overfitting. This behavior applies only during training; during inference, dropout is a no-op. The paper that introduced dropout suggested scaling the weights during inference, but because CNTK multiplies the surviving values by 1 / (1 - $dropout-rate) during training, no such scaling is necessary.
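This inverse scaling keeps the expected value of each element unchanged during training. Writing p for $dropout-rate and x for an input element, the output y satisfies:

  E[y] = p * 0 + (1 - p) * (x / (1 - p)) = x

which is why no rescaling is required at inference time.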

Parameters
$x Input tensor.
$dropout-rate Probability that an element of $x will be set to zero.
$seed Random seed.
$name The name of the function instance in the network.

Example

  let $input-variable := cntk:input-variable(cntk:shape((3)), "float",
    fn:false(), fn:false(), "feature")
  return cntk:dropout($input-variable, 0.603922242018333, 1193159359, "jYpVI")
  => cntk:function(Composite Dropout (Input(Name(feature), Shape([3]),
  Dynamic Axes([Sequence Axis(Default Dynamic Axis), Batch Axis(Default Batch Axis)]))))
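Because $seed and $name are optional, a minimal call needs only the input variable and the dropout rate. A sketch (hypothetical variable name; relies only on the signatures shown above):

  let $features := cntk:input-variable(cntk:shape((3)), "float",
    fn:false(), fn:false(), "feature")
  return cntk:dropout($features, 0.5)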
