Browse Source

Update grapher.css

Lutz Roeder 1 year ago
parent
commit
ad4425a9a0
3 changed files with 0 additions and 6 deletions
  1. source/armnn-metadata.json (+0 −1)
  2. source/grapher.css (+0 −3)
  3. source/keras-metadata.json (+0 −2)

+ 0 - 1
source/armnn-metadata.json

@@ -104,7 +104,6 @@
   },
   {
     "name": "DetectionPostProcessLayer",
-    "category": "Custom",
     "attributes": [
       { "name": "maxDetections", "type": "uint32" },
       { "name": "maxClassesPerDetection", "type": "uint32" },

+ 0 - 3
source/grapher.css

@@ -25,8 +25,6 @@
 .node-item-type-control:hover path { fill: #fff; }
 
 .node-item-type-layer path { fill: rgb(51, 85, 136); }
-.node-item-type-wrapper path { fill: rgb(238, 238, 238); }
-.node-item-type-wrapper text { fill: rgb(0, 0, 0) }
 .node-item-type-activation path { fill: rgb(112, 41, 33); }
 .node-item-type-pool path { fill: rgb(51, 85, 51); }
 .node-item-type-normalization path { fill: rgb(51, 85, 68); }
@@ -37,7 +35,6 @@
 .node-item-type-data path { fill: rgb(85, 85, 85); }
 .node-item-type-quantization path { fill: rgb(80, 40, 0); }
 .node-item-type-attention path { fill: rgb(120, 60, 0); }
-.node-item-type-custom path { fill: rgb(128, 128, 128); }
 
 .node-item-input path { fill: #fff; }
 .node-item-input:hover { cursor: pointer; }

+ 0 - 2
source/keras-metadata.json

@@ -649,7 +649,6 @@
   {
     "name": "Bidirectional",
     "module": "keras.layers",
-    "category": "Wrapper",
     "description": "Bidirectional wrapper for RNNs.",
     "attributes": [
       {
@@ -4446,7 +4445,6 @@
   {
     "name": "TimeDistributed",
     "module": "keras.layers",
-    "category": "Wrapper",
     "description": "This wrapper allows to apply a layer to every temporal slice of an input.\n\nEvery input should be at least 3D, and the dimension of index one of the\nfirst input will be considered to be the temporal dimension.\n\nConsider a batch of 32 video samples, where each sample is a 128x128 RGB\nimage with `channels_last` data format, across 10 timesteps.\nThe batch input shape is `(32, 10, 128, 128, 3)`.\n\nYou can then use `TimeDistributed` to apply the same `Conv2D` layer to each\nof the 10 timesteps, independently:\n\n```\n>>> inputs = layers.Input(shape=(10, 128, 128, 3), batch_size=32)\n>>> conv_2d_layer = layers.Conv2D(64, (3, 3))\n>>> outputs = layers.TimeDistributed(conv_2d_layer)(inputs)\n>>> outputs.shape\n(32, 10, 126, 126, 64)\n```\n\nBecause `TimeDistributed` applies the same instance of `Conv2D` to each of\nthe timestamps, the same set of weights are used at each timestamp.",
     "attributes": [
       {
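The `TimeDistributed` description in the diff above explains that the same layer instance (and therefore the same set of weights) is applied independently to every temporal slice of the input. A minimal NumPy sketch of that idea, without requiring Keras (the shapes and the shared weight matrix here are illustrative assumptions, not taken from the diff):

```python
import numpy as np

# Sketch of the TimeDistributed idea: apply ONE shared weight matrix
# to every temporal slice of a (batch, timesteps, features) input.
rng = np.random.default_rng(0)
batch, timesteps, features, units = 4, 10, 8, 16

inputs = rng.standard_normal((batch, timesteps, features))
shared_w = rng.standard_normal((features, units))  # single shared set of weights

# Apply the same weights independently at each timestep, then re-stack
# along the temporal axis — output shape is (batch, timesteps, units).
outputs = np.stack(
    [inputs[:, t, :] @ shared_w for t in range(timesteps)],
    axis=1,
)

print(outputs.shape)  # (4, 10, 16)
```

Because every timestep goes through the same `shared_w`, the per-timestep results are identical to applying the matrix to each slice directly, which is exactly the weight-sharing property the description emphasizes.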