
add cl_khr_unified_svm extension #1282

Open · bashbaug wants to merge 3 commits into main
Conversation

@bashbaug (Contributor) commented Nov 8, 2024

This PR contains the draft specification for cl_khr_unified_svm. It is intended to provide early access for review and community feedback, and it is not a ratified specification.

cl_khr_unified_svm adds additional types of Shared Virtual Memory (SVM) to OpenCL. Compared to the Coarse-Grained and Fine-Grained SVM in OpenCL 2.0 and newer, the additional types of SVM added by this extension provide:

  • Sufficient functionality to implement "Unified Shared Memory" (USM) in other APIs, such as SYCL.

  • Additional control over the ownership and accessibility of SVM allocations, to more precisely choose between application performance and programmer convenience.

  • A simpler programming model, by automatically migrating more SVM allocations between devices and the host, or by accessing more SVM allocations on the host without needing to map or unmap the allocation.

Specifically, this extension provides:

  • Extensible interfaces to support many types of SVM, including the SVM types defined in core OpenCL, in this extension, and additional SVM types defined by other combinations of SVM capabilities.

  • Explicit control over memory placement and migration by supporting device-owned SVM allocations for best performance, host-owned SVM allocations for wide visibility, and shared SVM allocations that may migrate between devices and the host.

  • The ability to query detailed SVM capabilities for each SVM allocation type supported by a platform and device.

  • Additional properties to control how memory is allocated and freed, including properties to associate an SVM allocation with both a device and a context.

  • A mechanism to indicate that a kernel may access SVM allocations indirectly, without passing a set of indirectly accessed SVM allocations to the kernel, improving usability and reducing driver overhead for kernels that access many SVM allocations (see the sketch after this list).

  • A new query function to query properties of an SVM allocation.

  • A new function to suggest an SVM allocation type for a set of SVM capabilities.
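As an illustration of the indirect-access mechanism mentioned above, a kernel might opt in with a call like the following. This is a minimal sketch: the enum comes from this extension's headers, while reuse of the core clSetKernelExecInfo entry point and the cl_bool value are assumptions until the specification is ratified.

    // Sketch (assumption): opt a kernel into indirect SVM access so that
    // individual SVM pointers need not be listed one-by-one via the core
    // CL_KERNEL_EXEC_INFO_SVM_PTRS mechanism.
    cl_bool enableIndirectAccess = CL_TRUE;
    cl_int err = clSetKernelExecInfo(
        kernel,                                       // hypothetical kernel object
        CL_KERNEL_EXEC_INFO_SVM_INDIRECT_ACCESS_KHR,  // defined by this extension
        sizeof(enableIndirectAccess),
        &enableIndirectAccess);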

Because the interfaces defined by this specification are not final and are subject to change, they are not intended to be used in shipping software products. If you are interested in using this feature in your software product, please let us know!

@illuhad left a comment

Great to see at last what you've been working on :)

You asked for feedback, so here are some things I noticed:

  • I can see that the ability to request individual SVM features can be very powerful, if implementations are happy to carry the burden of dealing with all of the possible combinations. However, I am a bit concerned with adoption if the "common cases" of device/host/shared SVM that you describe are hidden underneath some not necessarily obvious combinations of bitfield values. Can we perhaps have convenience shortcuts for these common cases? This could make user code simpler and make it more attractive to use this memory management model, and it could provide guidance for users and vendors if everybody knew clearly which combinations of capabilities to focus their testing on. This could help portability and adoption.
  • Can we have the ability to register an existing memory allocation to make it device accessible, similarly to cudaHostRegister?

@bashbaug (Contributor, Author) replied:

Thank you for the feedback!

> I can see that the ability to request individual SVM features can be very powerful, if implementations are happy to carry the burden of dealing with all of the possible combinations. However, I am a bit concerned with adoption if the "common cases" of device/host/shared SVM that you describe are hidden underneath some not necessarily obvious combinations of bitfield values. Can we perhaps have convenience shortcuts for these common cases? This could make user code simpler and make it more attractive to use this memory management model, and it could provide guidance for users and vendors if everybody knew clearly which combinations of capabilities to focus their testing on. This could help portability and adoption.

This is the intent behind the SVM type "capability macros", which we've defined in the spec and in the headers. We've currently defined "macros" for the common cases we've been thinking about, and if other combinations become common they can be added with a simple header update.

Specifically, we've defined capability macros for:

  • CL_SVM_TYPE_MACRO_COARSE_GRAIN_BUFFER_KHR
  • CL_SVM_TYPE_MACRO_FINE_GRAIN_BUFFER_KHR
  • CL_SVM_TYPE_MACRO_DEVICE_KHR
  • CL_SVM_TYPE_MACRO_HOST_KHR
  • CL_SVM_TYPE_MACRO_SINGLE_DEVICE_SHARED_KHR
  • CL_SVM_TYPE_MACRO_SYSTEM_KHR

The capability macros provide a very concise and easy mechanism to request these types of SVM. For example, here is a way to allocate memory equivalent to SYCL "device USM":

    // Ask the implementation which SVM type index best matches the
    // CL_SVM_TYPE_MACRO_DEVICE_KHR capabilities (device-owned SVM).
    // "props" is an optional list of SVM allocation properties, assumed to
    // have been set up earlier (for example, to associate the allocation
    // with a specific device); it may also be NULL for defaults.
    cl_uint index = CL_UINT_MAX;
    clGetSVMSuggestedTypeIndexKHR(
        context(),
        CL_SVM_TYPE_MACRO_DEVICE_KHR,   // required capabilities
        0,                              // no additional desired capabilities
        props,
        0,
        &index);

    // Allocate "gwx" elements of device-owned SVM using the suggested type
    // index; production code should check that a valid index was returned
    // and pass an error code pointer instead of nullptr.
    cl_uint* d_src = (cl_uint*)clSVMAllocWithPropertiesKHR(
        context(),
        props,
        index,
        gwx * sizeof(cl_uint),
        nullptr );
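For context, a hypothetical continuation of the snippet above using only core OpenCL 2.0 entry points; kernel, queue, and the host buffer h_src are assumed names, and whether clEnqueueSVMMemcpy must work with every SVM type is one of the open questions discussed later in this thread.

    // Pass the SVM pointer directly as a kernel argument (core OpenCL 2.0).
    clSetKernelArgSVMPointer(kernel, 0, d_src);

    // Copy host data into the allocation with a blocking SVM memcpy
    // (core OpenCL 2.0); h_src is a hypothetical host buffer.
    clEnqueueSVMMemcpy(queue, CL_TRUE, d_src, h_src,
        gwx * sizeof(cl_uint), 0, nullptr, nullptr);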
> Can we have the ability to register an existing memory allocation to make it device accessible, similarly to cudaHostRegister?

We have this request recorded (in document issue (5)), but we are not currently considering it in scope for the initial extension revision, primarily because this functionality isn't supported by the Intel USM extension and hence isn't required to implement SYCL. Does that sound reasonable?

If we don't include it in the initial extension, we could support it via a layered extension, or a subsequent extension version.

@illuhad commented Nov 18, 2024

Hi Ben!

> The capability macros provide a very concise and easy mechanism to request these types of SVM. For example, here is a way to allocate memory equivalent to SYCL "device USM":

Ah, I see, thanks for pointing this out. This was not very obvious to me; perhaps other users might feel the same way?
For me this workflow would certainly work, but then I'm more of a framework vendor than an application developer/end user, so sensitivities might be different.

> We have this request recorded (in document issue (5)), but we are not currently considering it in scope for the initial extension revision, primarily because this functionality isn't supported by the Intel USM extension and hence isn't required to implement SYCL. Does that sound reasonable?

I agree that it is not needed for SYCL (at least not today), but I would need it for C++ standard parallelism offloading sooner rather than later, in order to make the stack GPU-accessible. But I appreciate that the scope of individual extensions must be limited to keep the work manageable in scope and size :)

Comment on lines 311 to 314
| `CL_SVM_CAPABILITY_DEVICE_READ_KHR`
| This type of SVM is accessible on the device for reading.
| `CL_SVM_CAPABILITY_DEVICE_WRITE_KHR`
| This type of SVM is writeable on the device for writing.
@bashbaug (Contributor, Author) left a comment

We should clarify exactly what is meant by "accessible on the device for reading" and "accessible on the device for writing", given issues like #770.

  • Do these capabilities only apply to kernels, or do they apply to other APIs as well?

Specifically:

  • What capabilities are needed for clEnqueueSVMMemcpy? Does it matter if the SVM allocation is the memcpy source or destination?
  • What capabilities are needed for clEnqueueSVMMemFill?
  • What capabilities are needed for clEnqueueSVMMigrateMem?

Minor, there is a small typo here too: "writeable on the device for writing" should be "accessible on the device for writing".
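For concreteness, here is a hedged sketch of requesting these capabilities directly, reusing the suggested-type query from the example earlier in the thread; whether these bits alone are sufficient for the enqueue APIs listed above is exactly the open question.

    // Sketch (assumption): ask for an SVM type that is at least readable and
    // writeable on the device by passing individual capability bits rather
    // than a capability macro.
    cl_uint index = CL_UINT_MAX;
    clGetSVMSuggestedTypeIndexKHR(
        context(),
        CL_SVM_CAPABILITY_DEVICE_READ_KHR |
        CL_SVM_CAPABILITY_DEVICE_WRITE_KHR,
        0,          // no additional desired capabilities
        nullptr,    // no allocation properties
        0,
        &index);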

#define CL_KERNEL_EXEC_INFO_SVM_INDIRECT_ACCESS_KHR 0x11BB
----

== Modifications to the OpenCL API Specification
@b-sumner commented Nov 19, 2024

Aren't there any updates to section 3.3 Memory Model?
