diff --git a/apps/dev-workbench/images/1-X4y-Q8xws-KRS1f-GFc-J3-C3-A.png b/apps/dev-workbench/images/1-X4y-Q8xws-KRS1f-GFc-J3-C3-A.png
new file mode 100644
index 000000000..e5e6bb090
Binary files /dev/null and b/apps/dev-workbench/images/1-X4y-Q8xws-KRS1f-GFc-J3-C3-A.png differ
diff --git a/apps/dev-workbench/images/Deepin-Screenshot-select-area-20200810034956.png b/apps/dev-workbench/images/Deepin-Screenshot-select-area-20200810034956.png
new file mode 100644
index 000000000..4545a2bcf
Binary files /dev/null and b/apps/dev-workbench/images/Deepin-Screenshot-select-area-20200810034956.png differ
diff --git a/apps/dev-workbench/images/Deepin-Screenshot-select-area-20200810043400.png b/apps/dev-workbench/images/Deepin-Screenshot-select-area-20200810043400.png
new file mode 100644
index 000000000..6910571fa
Binary files /dev/null and b/apps/dev-workbench/images/Deepin-Screenshot-select-area-20200810043400.png differ
diff --git a/apps/dev-workbench/images/out-1.gif b/apps/dev-workbench/images/out-1.gif
new file mode 100644
index 000000000..10b886edd
Binary files /dev/null and b/apps/dev-workbench/images/out-1.gif differ
diff --git a/apps/dev-workbench/readme.md b/apps/dev-workbench/readme.md
index d2e4571c3..119378999 100644
--- a/apps/dev-workbench/readme.md
+++ b/apps/dev-workbench/readme.md
@@ -10,7 +10,7 @@ There are 3 steps involved in creating a model:
 ## 1. Creating/selecting dataset
 Since we're working with tensorflow.js, a preferred type of datasets are `spritesheets` (like MNIST [sprite](https://storage.googleapis.com/learnjs-data/model-builder/mnist_images.png) in [this](https://codelabs.developers.google.com/codelabs/tfjs-training-classfication/index.html#2) example).
-
+
 The workbench accepts only a *zip file* as dataset input which **must contain** three files: ***spritesheet*** (data.jpg), a ***binary labels*** (labels.bin) and the ***classes*** (labelnames.csv) with the filenames as specified.
 If user has the above specified format of dataset than it can be browsed directly and they can skip to Step 2 - Customising/training the model.
@@ -43,7 +43,7 @@ Some features are:
 ### Complete Layer Customisation
 User can add/remove/modify the CNN layers accordingly to get the desired output.
-
+
 [model.compile()](https://js.tensorflow.org/api/latest/#tf.LayersModel.compile) and [model.fit()](https://js.tensorflow.org/api/latest/#tf.LayersModel.fit) functions and their parameters can also be modified accordingly by the user.
@@ -52,7 +52,7 @@ User can add/remove/modify the CNN layers accordingly to get the desired output.
 - Export can be done just after step-1 (dataset selection) or anytime during layers customisation.
 - A zip file will be exported which can be imported anytime by using the import option.
-
+
 ### Basic/advanced Mode
 The user can toggle the **_advanced mode_** from the options. This mode is targeted towards more advanced users who might want to customize their models in more detailed fashion.
@@ -68,7 +68,7 @@ Though server-side training does not work with visualization using [tfjs-vis](h
 ### Training visualisation
 After the user clicks on ‘train’ the training process will start and visualization will be shown using [TFjs-vis](https://github.com/tensorflow/tfjs/tree/master/tfjs-vis). 
(Only in browser-training mode)
-
+
 ### Parameters and valid values
 | Name | Value |
diff --git a/apps/dev-workbench/workbench.html b/apps/dev-workbench/workbench.html
index 2cd832a60..e92745660 100644
--- a/apps/dev-workbench/workbench.html
+++ b/apps/dev-workbench/workbench.html
@@ -71,6 +71,7 @@
+