author    Matan Azrad <matan@mellanox.com>       2019-07-29 11:53:29 +0000
committer Ferruh Yigit <ferruh.yigit@intel.com>  2019-07-29 16:54:27 +0200
commit    17ed314c6c0b269c92941e41aee3622f5726e1fa (patch)
tree      0846d8f55408c3f51b90de8be1ccfb8274bf24a0 /drivers/net/mlx5/mlx5_ethdev.c
parent    5158260917a0588052500af4e011b6cd77143c1c (diff)
net/mlx5: allow LRO per Rx queue
Enabling the LRO offload per queue makes sense because the user will
probably want to allocate a different mempool for LRO queues - the
mbuf size of an LRO mempool may be bigger than that of a non-LRO
mempool.
Change the LRO offload to be per queue instead of per port.
If LRO is enabled on at least one queue, all the queues will be
configured via DevX.
If RSS flows direct TCP packets to queues with different LRO settings,
these flows will not be offloaded with LRO.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
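As context for the mempool point in the message above, an application could pair a per-queue LRO offload with a dedicated large-mbuf pool. This is only an illustrative sketch, not part of the patch: the pool names, sizes, and queue indexes are made up, it assumes an already-configured port and initialized EAL, and it uses the DEV_RX_OFFLOAD_TCP_LRO flag name from this DPDK era.

```c
/* Sketch only: assumes port_id is a configured port and EAL is up.
 * Pool sizes and queue numbers are hypothetical.
 */
struct rte_eth_dev_info dev_info;
struct rte_eth_rxconf rxq_conf;
struct rte_mempool *lro_pool, *std_pool;

rte_eth_dev_info_get(port_id, &dev_info);
rxq_conf = dev_info.default_rxconf;

/* Bigger mbuf data room for the LRO queue, since coalesced TCP
 * segments can exceed the default buffer size. */
lro_pool = rte_pktmbuf_pool_create("lro_pool", 8192, 256, 0,
				   16384 + RTE_PKTMBUF_HEADROOM,
				   rte_socket_id());
std_pool = rte_pktmbuf_pool_create("std_pool", 8192, 256, 0,
				   RTE_MBUF_DEFAULT_BUF_SIZE,
				   rte_socket_id());

/* Queue 0: LRO enabled, backed by the large-mbuf pool. */
rxq_conf.offloads = DEV_RX_OFFLOAD_TCP_LRO;
rte_eth_rx_queue_setup(port_id, 0, 1024, rte_socket_id(),
		       &rxq_conf, lro_pool);

/* Queue 1: LRO disabled, backed by the default-size pool. */
rxq_conf.offloads = 0;
rte_eth_rx_queue_setup(port_id, 1, 1024, rte_socket_id(),
		       &rxq_conf, std_pool);
```

Per the last paragraph of the message, RSS flows that span queue 0 and queue 1 here would not be offloaded with LRO, since the queues differ in their LRO setting.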
Diffstat (limited to 'drivers/net/mlx5/mlx5_ethdev.c')
-rw-r--r--  drivers/net/mlx5/mlx5_ethdev.c | 8 +-------
1 file changed, 1 insertion(+), 7 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index 9d11831..9629cfb 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -389,7 +389,6 @@ mlx5_dev_configure(struct rte_eth_dev *dev)
 	const uint8_t use_app_rss_key =
 		!!dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key;
 	int ret = 0;
-	unsigned int lro_on = mlx5_lro_on(dev);
 
 	if (use_app_rss_key &&
 	    (dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len !=
@@ -454,11 +453,6 @@ mlx5_dev_configure(struct rte_eth_dev *dev)
 			j = 0;
 		}
 	}
-	if (lro_on && priv->config.cqe_comp) {
-		/* CQE compressing is not supported for LRO CQEs. */
-		DRV_LOG(WARNING, "Rx CQE compression isn't supported with LRO");
-		priv->config.cqe_comp = 0;
-	}
 	ret = mlx5_proc_priv_init(dev);
 	if (ret)
 		return ret;
@@ -571,7 +565,7 @@ mlx5_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
 	info->max_tx_queues = max;
 	info->max_mac_addrs = MLX5_MAX_UC_MAC_ADDRESSES;
 	info->rx_queue_offload_capa = mlx5_get_rx_queue_offloads(dev);
-	info->rx_offload_capa = (mlx5_get_rx_port_offloads(dev) |
+	info->rx_offload_capa = (mlx5_get_rx_port_offloads() |
 				 info->rx_queue_offload_capa);
 	info->tx_offload_capa = mlx5_get_tx_port_offloads(dev);
 	info->if_index = mlx5_ifindex(dev);