path: root/doc
authorShahaf Shuler <shahafs@mellanox.com>2019-03-10 10:28:00 +0200
committerThomas Monjalon <thomas@monjalon.net>2019-03-30 16:48:56 +0100
commitc33a675b6276306db3d06e0e5b0f128ec994669c (patch)
tree1a79305162018b98b9f9c54114cec345792f13b9 /doc
parent0cbce3a167f189330f50099c5eb085f17565c0a4 (diff)
downloaddpdk-next-eventdev-c33a675b6276306db3d06e0e5b0f128ec994669c.zip
dpdk-next-eventdev-c33a675b6276306db3d06e0e5b0f128ec994669c.tar.gz
dpdk-next-eventdev-c33a675b6276306db3d06e0e5b0f128ec994669c.tar.xz
bus: introduce device level DMA memory mapping
The DPDK APIs expose three different modes to work with memory used for DMA:

1. Use DPDK-owned memory (backed by the DPDK-provided hugepages). This memory is allocated by the DPDK libraries, included in the DPDK memory system (memseg lists) and automatically DMA-mapped by the DPDK layers.

2. Use memory allocated by the user and registered with the DPDK memory system. Upon registration of the memory, the DPDK layers will DMA-map it to all needed devices. After registration, allocation of this memory is done with the rte_*malloc APIs.

3. Use memory allocated by the user and not registered with the DPDK memory system. This is for users who want tight control over this memory (e.g. to avoid the rte_malloc header). The user should create the memory, register it through the rte_extmem_register API, and call a DMA map function in order to register such memory with the different devices.

The scope of this patch is #3 above. Currently the only way to map external memory is through VFIO (rte_vfio_dma_map). While VFIO is common, other vendors use different ways to map memory (e.g. Mellanox and NXP). This patch moves the DMA mapping to vendor-agnostic APIs: device-level DMA map and unmap APIs were added, currently implemented only for PCI devices. For PCI bus devices, the PCI driver can expose its own map and unmap functions to be used for the mapping. In case the driver doesn't provide any, the memory will be mapped, if possible, to the IOMMU through the VFIO APIs.

Application usage of those APIs is quite simple:
* allocate memory
* call rte_extmem_register on the memory chunk
* take a device, and query its rte_device
* call the device-specific mapping function for this device

Future work will deprecate the rte_vfio_dma_map and rte_vfio_dma_unmap APIs, leaving the rte device APIs as the preferred option for the user.
Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Gaetan Rivet <gaetan.rivet@6wind.com>
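The four application-usage steps above can be sketched in C as follows. This is a hedged illustration, not code from the patch: the use of anonymous hugepage mmap memory, the 2 MB page size, the port id parameter, and mapping IOVA equal to the virtual address are all assumptions for the example; error handling is minimal.

```c
#include <stdint.h>
#include <sys/mman.h>
#include <rte_dev.h>
#include <rte_ethdev.h>
#include <rte_memory.h>

/* Map an externally allocated memory chunk for DMA on one ethdev. */
static int map_external_memory(uint16_t port_id, size_t len)
{
	struct rte_eth_dev_info dev_info;
	size_t pgsz = 1 << 21; /* assumption: 2 MB hugepages back the region */
	void *addr;

	/* 1. Allocate memory outside of DPDK. */
	addr = mmap(NULL, len, PROT_READ | PROT_WRITE,
		    MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (addr == MAP_FAILED)
		return -1;

	/* 2. Register the chunk with the DPDK memory subsystem.
	 * Passing NULL as the IOVA table lets DPDK detect the IOVAs
	 * (or mark them unavailable). */
	if (rte_extmem_register(addr, len, NULL, len / pgsz, pgsz) != 0)
		goto error;

	/* 3. Take a device and query its rte_device. */
	if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
		goto error;

	/* 4. Call the device-level mapping function introduced here. */
	if (rte_dev_dma_map(dev_info.device, addr,
			    (uint64_t)(uintptr_t)addr, len) != 0)
		goto error;

	return 0;

error:
	munmap(addr, len);
	return -1;
}
```

Teardown would mirror these steps in reverse: rte_dev_dma_unmap, rte_extmem_unregister, then munmap.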
Diffstat (limited to 'doc')
-rw-r--r--doc/guides/prog_guide/env_abstraction_layer.rst2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/doc/guides/prog_guide/env_abstraction_layer.rst b/doc/guides/prog_guide/env_abstraction_layer.rst
index 2361c3b..c134636 100644
--- a/doc/guides/prog_guide/env_abstraction_layer.rst
+++ b/doc/guides/prog_guide/env_abstraction_layer.rst
@@ -282,7 +282,7 @@ The expected workflow is as follows:
- If IOVA table is not specified, IOVA addresses will be assumed to be
unavailable
- Other processes must attach to the memory area before they can use it
-* Perform DMA mapping with ``rte_vfio_dma_map`` if needed
+* Perform DMA mapping with ``rte_dev_dma_map`` if needed
* Use the memory area in your application
* If memory area is no longer needed, it can be unregistered
- If the area was mapped for DMA, unmapping must be performed before