
cntk:optimized-rnnstack

cntk:optimized-rnnstack(
   $operand as cntk:variable,
   $weights as cntk:variable,
   $hidden-size as xs:unsignedLong,
   $num-layers as xs:unsignedLong,
   [$bidirectional as xs:boolean],
   [$recurrent-op as xs:string],
   [$name as xs:string]
) as cntk:function

Summary

An RNN implementation that uses the primitives in cuDNN. If cuDNN is not available, the function fails. When no cuDNN is available, you can use convert_optimized_rnnstack to convert a model to a GEMM-based implementation.

Parameters
$operand Input of the optimized RNN stack.
$weights Parameter tensor that holds the learned weights.
$hidden-size Number of hidden units in each layer (and in each direction).
$num-layers Number of layers in the stack.
$bidirectional Whether each layer should compute in both the forward and the backward direction and concatenate the results (if fn:true(), the output dimension is twice $hidden-size). The default is fn:false(), meaning the recurrence is computed only in the forward direction; see the sketch after this list.
$recurrent-op One of ‘lstm’, ‘gru’, ‘relu’, or ‘tanh’.
$name The name of the function instance in the network.
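
The following is a minimal sketch, assuming a GPU (cuDNN) device, that builds a bidirectional GRU stack to illustrate the $bidirectional and $recurrent-op options. The weight tensor size (200000) and the scalar initializer are illustrative placeholders; the size cuDNN actually expects depends on the recurrent op, hidden size, number of layers, and directionality.

  xquery version "1.0-ml";
  (: Sketch only: the weight tensor size below is an illustrative placeholder. :)
  let $shape := cntk:shape((3))
  let $input-variable := cntk:input-variable($shape, "float")
  let $weights := cntk:parameter-from-scalar(cntk:shape((200000)), "float", 2)
  (: With $bidirectional = fn:true() and $recurrent-op = "gru",
     the output dimension is twice the hidden size (2 x 100 = 200). :)
  let $model := cntk:optimized-rnnstack($input-variable, $weights, 100, 2,
    fn:true(), "gru", "bi-gru-stack")
  return cntk:function-output($model)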

Example

  xquery version "1.0-ml";
  (: This example requires a GPU-enabled (cuDNN) CNTK device. :)
  let $shape := cntk:shape((3))
  let $input-variable1 := cntk:input-variable($shape, "float")
  let $weights := cntk:parameter-from-scalar(cntk:shape((122800)),"float",2)
  let $model := cntk:optimized-rnnstack($input-variable1, $weights, 100, 2, fn:false(), "lstm", "my-rnn-stack")
  
  let $input-value := cntk:value($shape, json:to-array((1 to cntk:shape-total-size($shape))))
  let $pair1 := json:to-array(($input-variable1, $input-value))
  let $output-variable := cntk:function-output($model)
  let $output-value := cntk:evaluate($model, $pair1, $output-variable)
  return (fn:replace(xdmp:quote($output-value), "0x[a-zA-Z0-9]*", "Value"))
  => cntk:value(Shape([100 x 1 x 1]), Device Kind Name(GPU))
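
Because $bidirectional is fn:false() here, the output dimension equals $hidden-size (100), which matches the Shape([100 x 1 x 1]) of the returned value.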
