Commit 32df27b (parent c3f4264): Updated docs for custom operations. Closes #1170 (#1211)

docs/fundamentals/custom_operations.rst: 34 additions, 27 deletions (4 files changed in the commit)

The Basics
----------

Operations are used in pipelines and have named inputs and outputs. When creating a pipeline, if you do not find an existing operation for the given task, you can easily create your own by selecting the `New Operation...` operation from the add operation dialog. This creates a new operation definition and opens it in the operation editor. The operation editor has two main parts: the interface editor and the implementation editor.

.. figure:: operation_editor.png
    :align: center
    :scale: 45 %

    Editing the "Train" operation from the "CIFAR10" example

The interface editor is on the left and presents the interface as a diagram, showing the input and output data as objects flowing into or out of the operation. Selecting the operation node in the interface editor expands the node and allows the user to add or edit attributes for the operation. These attributes are exposed when using the operation in a pipeline and are set at design time, that is, when creating the pipeline. The interface diagram may also contain light blue nodes flowing into the operation. These nodes represent "references" that the operation accepts as input before running. When using the operation, references appear alongside the attributes but, when clicked, let the user select from a list of all possible targets.

.. figure:: operation_interface.png
    :align: center
    :scale: 85 %

    The "Train" operation accepts training data, a model, and attributes for shuffling the data, setting the batch size, and setting the number of epochs.

On the right of the operation editor is the implementation editor, a code editor specially tailored for programming the implementations of operations in DeepForge. It is also synchronized with the interface editor. A section of the implementation is shown below:

.. code:: python

    import keras
    from matplotlib import pyplot as plt

    class Train():
        def __init__(self, model, shuffle=True, epochs=100, batch_size=32):
            self.model = model
            self.epochs = epochs
            self.shuffle = shuffle
            self.batch_size = batch_size

        def execute(self, training_data):
            (x_train, y_train) = training_data
            opt = keras.optimizers.rmsprop(lr=0.0001, decay=1e-6)
            self.model.compile(loss='categorical_crossentropy',
                               optimizer=opt,
                               metrics=['accuracy'])
            # PlotLosses is a callback defined elsewhere in this implementation
            plot_losses = PlotLosses()
            self.model.fit(x_train, y_train,
                           self.batch_size,
                           epochs=self.epochs,
                           callbacks=[plot_losses],
                           shuffle=self.shuffle)

            model = self.model
            return model

The "Train" operation uses capabilities from the :code:`keras` package to train the neural network. The operation sets all of its parameters using values provided as either attributes or references. In the implementation, attributes are passed as arguments to the constructor, making the user-defined attributes accessible from within the implementation. References are treated similarly to operation inputs and are also constructor arguments; this can be seen with the :code:`model` constructor argument. Finally, operations return their outputs from the :code:`execute` method; in this example, it returns a single output named :code:`model`, that is, the trained neural network.
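
The :code:`PlotLosses` callback used in :code:`execute` is defined elsewhere in the implementation and not shown in the docs. Below is a rough, self-contained sketch of what such a callback might do; the name and details are assumptions, and a real version would subclass :code:`keras.callbacks.Callback` and redraw a :code:`matplotlib` figure. A plain class is used here so the sketch runs without keras installed:

```python
# Hypothetical sketch of a PlotLosses-style callback. A real Keras
# callback would subclass keras.callbacks.Callback; a plain class is
# used here so the sketch is self-contained.
class PlotLosses:
    def __init__(self):
        self.epochs = []
        self.losses = []

    def on_epoch_end(self, epoch, logs=None):
        # Keras calls this hook after every epoch with a `logs` dict
        logs = logs or {}
        self.epochs.append(epoch)
        self.losses.append(logs.get('loss'))
        # A real callback would redraw the matplotlib figure here so
        # that DeepForge can stream the updated plot to the user.

# Simulate what model.fit() would do over three epochs:
cb = PlotLosses()
for epoch, loss in enumerate([1.9, 1.2, 0.8]):
    cb.on_epoch_end(epoch, {'loss': loss})
print(cb.losses)  # → [1.9, 1.2, 0.8]
```

Because keras drives the callback, the operation itself only needs to construct the object and pass it to :code:`fit` via the :code:`callbacks` argument, as shown in the implementation above.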

After defining the interface and implementation, we can now use the "Train" operation in our pipelines! An example is shown below.

.. figure:: train_operation.png
    :align: center
    :scale: 85 %

    Using the "Train" operation in a pipeline

Operation feedback
------------------

Operations in DeepForge can generate metadata about their execution. This metadata is generated during execution and reported back to the user in real time; one example is live plotting feedback while a model trains. When implementing an operation in DeepForge, this metadata can be created using :code:`matplotlib` plotting capabilities.

.. figure:: graph_example.png
    :align: center
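
A minimal sketch of that feedback is ordinary matplotlib code. The loss values below are made up for illustration, and the assumption (per the docs above) is that DeepForge captures the matplotlib output and streams it to the browser:

```python
# Hypothetical feedback snippet: plot a training-error curve with
# matplotlib. DeepForge is assumed to capture the figure and display
# it in the execution view in real time.
import matplotlib
matplotlib.use('Agg')  # non-GUI backend so the sketch runs headless
from matplotlib import pyplot as plt

losses = [1.9, 1.2, 0.8, 0.6]  # stand-in per-epoch loss values
plt.plot(range(len(losses)), losses, label='error')
plt.title('Training Error')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend()
plt.show()  # inside an operation, this is where the feedback appears
```

In practice the plotting calls would live inside a training callback (such as the :code:`PlotLosses` callback used by the "Train" operation) so the curve updates after every epoch rather than once at the end.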