[RFC] vhost: introduce operation to get vDPA queue stats

Message ID 1584371573-4243-1-git-send-email-matan@mellanox.com
State Superseded, archived
Delegated to: Maxime Coquelin
Series
  • [RFC] vhost: introduce operation to get vDPA queue stats

Checks

Context Check Description
ci/Intel-compilation success Compilation OK
ci/checkpatch warning coding style issues

Commit Message

Matan Azrad March 16, 2020, 3:12 p.m. UTC
The vDPA device offloads the entire datapath of the vhost device to the HW
device.

In order to expose traffic information to the user, this patch introduces a
new API to get traffic statistics per virtio queue.

The statistics are taken directly from the vDPA driver managing the HW
device.

Signed-off-by: Matan Azrad <matan@mellanox.com>
---
 lib/librte_vhost/rte_vdpa.h            | 45 +++++++++++++++++++++++++++++++++-
 lib/librte_vhost/rte_vhost_version.map |  1 +
 lib/librte_vhost/vdpa.c                | 14 +++++++++++
 3 files changed, 59 insertions(+), 1 deletion(-)


The statistics are important for exposing traffic information to the application.
Please review and comment.

Thanks!

Comments

Maxime Coquelin April 9, 2020, 2:34 p.m. UTC | #1
Hi Matan,

On 3/16/20 4:12 PM, Matan Azrad wrote:
> The vDPA device offloads all the datapath of the vhost device to the HW
> device.
> 
> In order to expose to the user traffic information this patch introduce
> new API to get traffic statistics per virtio queue.
> 
> The statistics are taken directly from the vDPA driver managing the HW
> device.

Shouldn't the stats be collected from the PF like for "legacy" SR-IOV
via the sysfs?
Something like:
/sys/class/net/<PF name>/device/sriov/<VF ID>/stats/

Regards,
Maxime
Matan Azrad April 10, 2020, 1:54 p.m. UTC | #2
Hi Maxime

From: Maxime Coquelin
> Hi Matan,
> 
> On 3/16/20 4:12 PM, Matan Azrad wrote:
> > The vDPA device offloads all the datapath of the vhost device to the
> > HW device.
> >
> > In order to expose to the user traffic information this patch
> > introduce new API to get traffic statistics per virtio queue.
> >
> > The statistics are taken directly from the vDPA driver managing the HW
> > device.
> 
> Shouldn't the stats be collected from the PF like for "legacy" SR-IOV via the
> sysfs?
> Something like:
> /sys/class/net/<PF name>/device/sriov/<VF ID>/stats/

Yes, statistics can probably be taken from sysfs.

But here we are introducing a DPDK API to get the virtio statistics.
So a driver that can take statistics from sysfs can do so inside its driver operation.

In addition, the API provides per-virtio-queue statistics, which is very helpful for virtio datapath debuggability.


> Regards,
> Maxime
Maxime Coquelin April 10, 2020, 2:04 p.m. UTC | #3
On 4/10/20 3:54 PM, Matan Azrad wrote:
> Hi Maxime
> 
> From: Maxime Coquelin
>> Hi Matan,
>>
>> On 3/16/20 4:12 PM, Matan Azrad wrote:
>>> The vDPA device offloads all the datapath of the vhost device to the
>>> HW device.
>>>
>>> In order to expose to the user traffic information this patch
>>> introduce new API to get traffic statistics per virtio queue.
>>>
>>> The statistics are taken directly from the vDPA driver managing the HW
>>> device.
>>
>> Shouldn't the stats be collected from the PF like for "legacy" SR-IOV via the
>> sysfs?
>> Something like:
>> /sys/class/net/<PF name>/device/sriov/<VF ID>/stats/
> 
> Yes, probably, statistics can be taken from sysfs.
> 
> But we are introducing a DPDK API to get the virtio statistics.
> So, driver which can take statistics from there can take them from sysfs in the driver operation.  
> 
> In addition, the API includes per virtio queue statistics and in virtio data-path terms, it is very helpful for debuggability.

OK, per-queue stats are a valid point for queue debugging.

Even if we add this API, I still think we should ensure the stats are also
available through sysfs, so that Kubernetes integration is seamless with
respect to legacy SR-IOV [0].

Also, it would provide a single way to get statistics for both DPDK and
Kernel vDPA.

Thanks,
Maxime

> 
> 
>> Regards,
>> Maxime
>
Matan Azrad April 10, 2020, 2:15 p.m. UTC | #4
From: Maxime Coquelin
> Sent: Friday, April 10, 2020 5:04 PM
> To: Matan Azrad <matan@mellanox.com>; dev@dpdk.org
> Cc: Slava Ovsiienko <viacheslavo@mellanox.com>; Shahaf Shuler
> <shahafs@mellanox.com>; Xiao Wang <xiao.w.wang@intel.com>; Tiwei Bie
> <tiwei.bie@intel.com>
> Subject: Re: [PATCH] [RFC] vhost: inroduce operation to get vDPA queue
> stats
> 
> 
> 
> On 4/10/20 3:54 PM, Matan Azrad wrote:
> > Hi Maxime
> >
> > From: Maxime Coquelin
> >> Hi Matan,
> >>
> >> On 3/16/20 4:12 PM, Matan Azrad wrote:
> >>> The vDPA device offloads all the datapath of the vhost device to the
> >>> HW device.
> >>>
> >>> In order to expose to the user traffic information this patch
> >>> introduce new API to get traffic statistics per virtio queue.
> >>>
> >>> The statistics are taken directly from the vDPA driver managing the
> >>> HW device.
> >>
> >> Shouldn't the stats be collected from the PF like for "legacy" SR-IOV
> >> via the sysfs?
> >> Something like:
> >> /sys/class/net/<PF name>/device/sriov/<VF ID>/stats/
> >
> > Yes, probably, statistics can be taken from sysfs.
> >
> > But we are introducing a DPDK API to get the virtio statistics.
> > So, driver which can take statistics from there can take them from sysfs in
> the driver operation.
> >
> > In addition, the API includes per virtio queue statistics and in virtio data-
> path terms, it is very helpful for debuggability.
> 
> Ok, the per queue stats is a valid point for queue debug.
> 
> Even if we add this API, I still think we should ensure the stats will be
> available through the sysfs, so that it makes Kubernetes integration seemless
> wrt to legacy SR-IOV[0].

For Mellanox legacy SR-IOV, the sysfs stats you mentioned are available.

> Also, it would provide a single way to get statistics for both DPDK and Kernel
> vDPA.

Sure.

> Thanks,
> Maxime
> 
> >
> >
> >> Regards,
> >> Maxime
> >
Maxime Coquelin April 10, 2020, 2:20 p.m. UTC | #5
On 4/10/20 4:15 PM, Matan Azrad wrote:
> 
> 
> From: Maxime Coquelin
>> Sent: Friday, April 10, 2020 5:04 PM
>> To: Matan Azrad <matan@mellanox.com>; dev@dpdk.org
>> Cc: Slava Ovsiienko <viacheslavo@mellanox.com>; Shahaf Shuler
>> <shahafs@mellanox.com>; Xiao Wang <xiao.w.wang@intel.com>; Tiwei Bie
>> <tiwei.bie@intel.com>
>> Subject: Re: [PATCH] [RFC] vhost: inroduce operation to get vDPA queue
>> stats
>>
>>
>>
>> On 4/10/20 3:54 PM, Matan Azrad wrote:
>>> Hi Maxime
>>>
>>> From: Maxime Coquelin
>>>> Hi Matan,
>>>>
>>>> On 3/16/20 4:12 PM, Matan Azrad wrote:
>>>>> The vDPA device offloads all the datapath of the vhost device to the
>>>>> HW device.
>>>>>
>>>>> In order to expose to the user traffic information this patch
>>>>> introduce new API to get traffic statistics per virtio queue.
>>>>>
>>>>> The statistics are taken directly from the vDPA driver managing the
>>>>> HW device.
>>>>
>>>> Shouldn't the stats be collected from the PF like for "legacy" SR-IOV
>>>> via the sysfs?
>>>> Something like:
>>>> /sys/class/net/<PF name>/device/sriov/<VF ID>/stats/
>>>
>>> Yes, probably, statistics can be taken from sysfs.
>>>
>>> But we are introducing a DPDK API to get the virtio statistics.
>>> So, driver which can take statistics from there can take them from sysfs in
>> the driver operation.
>>>
>>> In addition, the API includes per virtio queue statistics and in virtio data-
>> path terms, it is very helpful for debuggability.
>>
>> Ok, the per queue stats is a valid point for queue debug.
>>
>> Even if we add this API, I still think we should ensure the stats will be
>> available through the sysfs, so that it makes Kubernetes integration seemless
>> wrt to legacy SR-IOV[0].

Just noticed I forgot to paste the link:

[0]:
https://github.com/killianmuldoon/node_exporter/commit/3e0b931e6526e68aa7192e6bca31ceb892d62754

> 
> For Mellanox legacy SRIOV the sysfs stats you mentioned are available.

Cool! I'll review your series next week.

Thanks,
Maxime

>> Also, it would provide a single way to get statistics for both DPDK and Kernel
>> vDPA.
> 
> Sure.
> 
>> Thanks,
>> Maxime
>>
>>>
>>>
>>>> Regards,
>>>> Maxime
>>>
>

Patch

diff --git a/lib/librte_vhost/rte_vdpa.h b/lib/librte_vhost/rte_vdpa.h
index 9a3deb3..595da76 100644
--- a/lib/librte_vhost/rte_vdpa.h
+++ b/lib/librte_vhost/rte_vdpa.h
@@ -37,6 +37,27 @@  struct rte_vdpa_dev_addr {
 	};
 };
 
+struct rte_vdpa_queue_stats {
+	/** Number of descriptors received by device */
+	uint64_t received_desc;
+	/** Number of descriptors completed by the device */
+	uint64_t completed_desc;
+	/** Number of bad descriptors received by device */
+	uint32_t bad_desc;
+	/**
+	 * Number of chained descriptors received that exceed the max allowed
+	 * chain by device
+	 */
+	uint32_t exceed_max_chain;
+	/**
+	 * Number of times device tried to read or write buffer that is not
+	 * registered to the device
+	 */
+	uint32_t invalid_buffer;
+	/** Number of errors detected by the device */
+	uint32_t errors;
+};
+
 /**
  * vdpa device operations
  */
@@ -73,8 +94,11 @@  struct rte_vdpa_dev_ops {
 	int (*get_notify_area)(int vid, int qid,
 			uint64_t *offset, uint64_t *size);
 
+	/** Get statistics of the queue */
+	int (*get_stats)(int did, uint16_t qid, struct rte_vdpa_queue_stats *stats);
+
 	/** Reserved for future extension */
-	void *reserved[5];
+	void *reserved[4];
 };
 
 /**
@@ -200,4 +224,23 @@  struct rte_vdpa_device *
 __rte_experimental
 int
 rte_vdpa_relay_vring_used(int vid, uint16_t qid, void *vring_m);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Get vDPA device queue statistics.
+ *
+ * @param did
+ *  device id
+ * @param qid
+ *  queue id
+ * @param stats
+ *  queue statistics pointer.
+ * @return
+ *  0 on success, negative errno value on failure.
+ */
+__rte_experimental
+int
+rte_vdpa_get_stats(int did, uint16_t qid, struct rte_vdpa_queue_stats *stats);
 #endif /* _RTE_VDPA_H_ */
diff --git a/lib/librte_vhost/rte_vhost_version.map b/lib/librte_vhost/rte_vhost_version.map
index 051d08c..c9dcff4 100644
--- a/lib/librte_vhost/rte_vhost_version.map
+++ b/lib/librte_vhost/rte_vhost_version.map
@@ -38,6 +38,7 @@  EXPERIMENTAL {
 	rte_vdpa_find_device_id;
 	rte_vdpa_get_device;
 	rte_vdpa_get_device_num;
+	rte_vdpa_get_stats;
 	rte_vhost_driver_attach_vdpa_device;
 	rte_vhost_driver_detach_vdpa_device;
 	rte_vhost_driver_get_vdpa_device_id;
diff --git a/lib/librte_vhost/vdpa.c b/lib/librte_vhost/vdpa.c
index 2b86708..57900fc 100644
--- a/lib/librte_vhost/vdpa.c
+++ b/lib/librte_vhost/vdpa.c
@@ -227,3 +227,17 @@  struct rte_vdpa_device *
 		free_ind_table(idesc);
 	return -1;
 }
+
+int
+rte_vdpa_get_stats(int did, uint16_t qid, struct rte_vdpa_queue_stats *stats)
+{
+	struct rte_vdpa_device *vdpa_dev;
+
+	vdpa_dev = rte_vdpa_get_device(did);
+	if (!vdpa_dev)
+		return -ENODEV;
+
+	RTE_FUNC_PTR_OR_ERR_RET(vdpa_dev->ops->get_stats, -ENOTSUP);
+
+	return vdpa_dev->ops->get_stats(did, qid, stats);
+}