.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_examples/plot_first_example.py"
.. LINE NUMBERS ARE GIVEN BELOW.
.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download_auto_examples_plot_first_example.py>`
        to download the full example code.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_examples_plot_first_example.py:

.. _l-onnx-array-first-api-example:
First examples with onnx-array-api
==================================

This demonstrates an easy case with :epkg:`onnx-array-api`.
It shows how a function can be easily converted into
ONNX.

A loss function from numpy to ONNX
++++++++++++++++++++++++++++++++++

The first example takes a loss function and converts it into ONNX.
.. GENERATED FROM PYTHON SOURCE LINES 17-24

.. code-block:: Python

    import numpy as np
    from onnx_array_api.npx import absolute, jit_onnx
    from onnx_array_api.plotting.text_plot import onnx_simple_text_plot
.. GENERATED FROM PYTHON SOURCE LINES 25-26

The function looks like a numpy function.

.. GENERATED FROM PYTHON SOURCE LINES 26-30

.. code-block:: Python

    def l1_loss(x, y):
        return absolute(x - y).sum()
.. GENERATED FROM PYTHON SOURCE LINES 31-35
The function needs to be converted into ONNX with function jit_onnx.
jitted_l1_loss is a wrapper. It intercepts all calls to l1_loss.
When it happens, it checks the input types and creates the
corresponding ONNX graph.
.. GENERATED FROM PYTHON SOURCE LINES 35-37
.. code-block:: Python
jitted_l1_loss = jit_onnx(l1_loss)
.. GENERATED FROM PYTHON SOURCE LINES 38-42
First execution and conversion to ONNX.
The wrapper caches the created onnx graph.
It reuses it if the input types and the number of dimension are the same.
It creates a new one otherwise and keep the old one.
.. GENERATED FROM PYTHON SOURCE LINES 42-49
.. code-block:: Python
x = np.array([[0.1, 0.2], [0.3, 0.4]], dtype=np.float32)
y = np.array([[0.11, 0.22], [0.33, 0.44]], dtype=np.float32)
res = jitted_l1_loss(x, y)
print(res)
.. rst-class:: sphx-glr-script-out

.. code-block:: none

    0.09999999
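
The printed value can be sanity-checked against a direct NumPy computation of the same L1 loss. This small check uses only NumPy (no onnx-array-api involved) and is not part of the original example:

```python
import numpy as np

x = np.array([[0.1, 0.2], [0.3, 0.4]], dtype=np.float32)
y = np.array([[0.11, 0.22], [0.33, 0.44]], dtype=np.float32)

# Same L1 loss, computed directly with NumPy: sum of absolute differences.
expected = np.abs(x - y).sum()
print(expected)  # close to 0.1, with float32 rounding
```

The slight deviation from 0.1 comes from float32 rounding, not from the ONNX conversion.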
.. GENERATED FROM PYTHON SOURCE LINES 50-51

The ONNX graph can be accessed in the following way.

.. GENERATED FROM PYTHON SOURCE LINES 51-53

.. code-block:: Python

    print(onnx_simple_text_plot(jitted_l1_loss.get_onnx()))
.. rst-class:: sphx-glr-script-out

.. code-block:: none

    opset: domain='' version=18
    input: name='x0' type=dtype('float32') shape=['', '']
    input: name='x1' type=dtype('float32') shape=['', '']
    Sub(x0, x1) -> r__0
    Abs(r__0) -> r__1
    ReduceSum(r__1, keepdims=0) -> r__2
    output: name='r__2' type=dtype('float32') shape=None
.. GENERATED FROM PYTHON SOURCE LINES 54-56

We can also define a more complex loss by computing the L1 loss on
the first column and the L2 loss on the second one.

.. GENERATED FROM PYTHON SOURCE LINES 56-80

.. code-block:: Python

    def l1_loss(x, y):
        return absolute(x - y).sum()


    def l2_loss(x, y):
        return ((x - y) ** 2).sum()


    def myloss(x, y):
        return l1_loss(x[:, 0], y[:, 0]) + l2_loss(x[:, 1], y[:, 1])


    jitted_myloss = jit_onnx(myloss)

    x = np.array([[0.1, 0.2], [0.3, 0.4]], dtype=np.float32)
    y = np.array([[0.11, 0.22], [0.33, 0.44]], dtype=np.float32)
    res = jitted_myloss(x, y)
    print(res)
    print(onnx_simple_text_plot(jitted_myloss.get_onnx()))
.. rst-class:: sphx-glr-script-out

.. code-block:: none

    0.042
    opset: domain='' version=18
    input: name='x0' type=dtype('float32') shape=['', '']
    input: name='x1' type=dtype('float32') shape=['', '']
    Constant(value=[1]) -> cst__0
    Constant(value=[2]) -> cst__1
    Constant(value=[1]) -> cst__2
    Slice(x0, cst__0, cst__1, cst__2) -> r__12
    Constant(value=[1]) -> cst__3
    Constant(value=[2]) -> cst__4
    Constant(value=[1]) -> cst__5
    Slice(x1, cst__3, cst__4, cst__5) -> r__14
    Constant(value=[0]) -> cst__6
    Constant(value=[1]) -> cst__7
    Constant(value=[1]) -> cst__8
    Slice(x0, cst__6, cst__7, cst__8) -> r__16
    Constant(value=[0]) -> cst__9
    Constant(value=[1]) -> cst__10
    Constant(value=[1]) -> cst__11
    Slice(x1, cst__9, cst__10, cst__11) -> r__18
    Constant(value=[1]) -> cst__13
    Squeeze(r__12, cst__13) -> r__20
    Constant(value=[1]) -> cst__15
    Squeeze(r__14, cst__15) -> r__21
    Sub(r__20, r__21) -> r__24
    Constant(value=[1]) -> cst__17
    Squeeze(r__16, cst__17) -> r__22
    Constant(value=[1]) -> cst__19
    Squeeze(r__18, cst__19) -> r__23
    Sub(r__22, r__23) -> r__25
    Abs(r__25) -> r__28
    ReduceSum(r__28, keepdims=0) -> r__30
    Constant(value=2) -> r__26
    CastLike(r__26, r__24) -> r__27
    Pow(r__24, r__27) -> r__29
    ReduceSum(r__29, keepdims=0) -> r__31
    Add(r__30, r__31) -> r__32
    output: name='r__32' type=dtype('float32') shape=None
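
The printed result 0.042 can again be reproduced with plain NumPy, independently of the ONNX graph. This cross-check is not part of the original example:

```python
import numpy as np

x = np.array([[0.1, 0.2], [0.3, 0.4]], dtype=np.float32)
y = np.array([[0.11, 0.22], [0.33, 0.44]], dtype=np.float32)

# L1 loss on the first column, L2 loss on the second one,
# mirroring myloss above.
l1 = np.abs(x[:, 0] - y[:, 0]).sum()
l2 = ((x[:, 1] - y[:, 1]) ** 2).sum()
print(l1 + l2)  # close to 0.042
```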
.. GENERATED FROM PYTHON SOURCE LINES 81-83

Eager mode
++++++++++
.. GENERATED FROM PYTHON SOURCE LINES 83-110

.. code-block:: Python

    import numpy as np
    from onnx_array_api.npx import absolute, eager_onnx


    def l1_loss(x, y):
        """
        err is a type inheriting from
        :class:`EagerTensor `.
        It needs to be converted to numpy first before any display.
        """
        err = absolute(x - y).sum()
        print(f"l1_loss={err.numpy()}")
        return err


    def l2_loss(x, y):
        err = ((x - y) ** 2).sum()
        print(f"l2_loss={err.numpy()}")
        return err


    def myloss(x, y):
        return l1_loss(x[:, 0], y[:, 0]) + l2_loss(x[:, 1], y[:, 1])
.. GENERATED FROM PYTHON SOURCE LINES 111-118

Eager mode is enabled by function :func:`eager_onnx `.
It intercepts all calls to ``myloss``. On the first call,
it replaces the numpy arrays with tensors corresponding to the
selected runtime, here numpy as well through
:class:`EagerNumpyTensor `.
.. GENERATED FROM PYTHON SOURCE LINES 118-123

.. code-block:: Python

    eager_myloss = eager_onnx(myloss)

    x = np.array([[0.1, 0.2], [0.3, 0.4]], dtype=np.float32)
    y = np.array([[0.11, 0.22], [0.33, 0.44]], dtype=np.float32)
.. GENERATED FROM PYTHON SOURCE LINES 124-130

First execution and conversion to ONNX.
The wrapper caches many ONNX graphs corresponding to
simple operators (`+`, `-`, `/`, `*`, ...), reduce functions,
and any other function from the API.
It reuses them if the input types and the number of dimensions are the same.
Otherwise, it creates new ones and keeps the old ones.

.. GENERATED FROM PYTHON SOURCE LINES 130-133

.. code-block:: Python

    res = eager_myloss(x, y)
    print(res)
.. rst-class:: sphx-glr-script-out

.. code-block:: none

    l1_loss=0.03999999910593033
    l2_loss=0.001999999163672328
    0.042
.. GENERATED FROM PYTHON SOURCE LINES 134-136

There is no single ONNX graph to show: in eager mode, every
operation is converted into its own small ONNX graph.
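
The interception idea behind eager mode can be illustrated with a toy stand-in. The ``TracingTensor`` class below is purely a conceptual sketch (it is not part of onnx-array-api): it records the name of each operation as it runs, where the real ``EagerTensor`` would instead build and execute a small ONNX graph per operation:

```python
import numpy as np


class TracingTensor:
    """Toy sketch of an eager wrapper: logs each operation it intercepts.

    The real EagerTensor in onnx-array-api executes a small ONNX graph
    per operation instead of logging; this class only mimics the shape
    of that mechanism.
    """

    def __init__(self, value, log):
        self.value = value  # underlying numpy array
        self.log = log      # shared list of intercepted operation names

    def __sub__(self, other):
        self.log.append("Sub")
        return TracingTensor(self.value - other.value, self.log)

    def abs(self):
        self.log.append("Abs")
        return TracingTensor(np.abs(self.value), self.log)

    def sum(self):
        self.log.append("ReduceSum")
        return TracingTensor(self.value.sum(), self.log)


log = []
x = TracingTensor(np.array([0.1, 0.3], dtype=np.float32), log)
y = TracingTensor(np.array([0.11, 0.33], dtype=np.float32), log)
res = (x - y).abs().sum()
print(log)         # the three intercepted operations, in call order
print(res.value)   # close to 0.04
```

Each intercepted call maps to one ONNX operator (``Sub``, ``Abs``, ``ReduceSum``), which is why eager execution produces many small graphs rather than one.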
.. rst-class:: sphx-glr-timing

   **Total running time of the script:** (0 minutes 0.104 seconds)


.. _sphx_glr_download_auto_examples_plot_first_example.py:

.. only:: html

  .. container:: sphx-glr-footer sphx-glr-footer-example

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: plot_first_example.ipynb <plot_first_example.ipynb>`

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: plot_first_example.py <plot_first_example.py>`

    .. container:: sphx-glr-download sphx-glr-download-zip

      :download:`Download zipped: plot_first_example.zip <plot_first_example.zip>`

.. only:: html

  .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_