
DOC: DLPack example in "Interoperability examples" misleading #30936

@lucyleeow

Description

Issue with current documentation:

The note in "Interoperability examples" -> "Example: DLPack" is not accurate:
https://numpy.org/devdocs/user/basics.interoperability.html#example-dlpack

It gives the example of:

```pycon
>>> x_torch = torch.arange(5, device='cuda')
>>> np.from_dlpack(x_torch)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: Unsupported device in DLTensor.
```

This does indeed fail with that error, but only because `device` is left as the default `None`. From the `from_dlpack` docs, that means:

device (Optional[device]) –

device on which to place the created array. If device is None and x supports DLPack, the output array must be on the same device as x. Default: None.

Of course this fails, because NumPy does not support GPU devices. But if you specify `device="cpu"`, the code works
(used v2.4.2 - though NumPy has supported DLPack v1 for a while now, I think):

```pycon
>>> x_torch = torch.arange(5, device='cuda')
>>> np.from_dlpack(x_torch, device='cpu')
```

I am assuming the `device` default was chosen because `from_dlpack` was originally meant to be zero-copy (dmlc/dlpack#132 (comment)), but I think since DLPack v1, cross-device movement is supported (data-apis/array-api#741).

Thus this note seems misleading:

Note that GPU tensors can’t be converted to NumPy arrays since NumPy doesn’t support GPU devices:

Idea or request for content:

(for context, noticed here: scikit-learn/scikit-learn#32755 (comment))
