[dpdk-dev,v3,4/9] vhost: Add API to get MTU value
Commit Message
This patch implements the function for the application to
get the MTU value.
rte_vhost_get_mtu() fills the mtu parameter with the MTU value
set in QEMU if VIRTIO_NET_F_MTU has been negotiated and returns 0,
-ENOTSUP otherwise.
The function returns -EAGAIN if Virtio feature negotiation
hasn't happened yet.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
lib/librte_vhost/rte_virtio_net.h | 15 +++++++++++++++
lib/librte_vhost/vhost.c | 19 +++++++++++++++++++
2 files changed, 34 insertions(+)
Comments
On Sun, Mar 12, 2017 at 05:34:01PM +0100, Maxime Coquelin wrote:
> This patch implements the function for the application to
> get the MTU value.
I'm thinking the other way. As we are making vhost generic, it
doesn't make too much sense now to store a net-specific value at the
vhost layer. I'm thinking we could introduce a vhost-user request
handler, something like:
rte_vhost_driver_register_msg_handler(socket, handler);
All vhost-user messages would then go to the driver's specific handler.
If a message is handled there, the vhost lib does nothing; otherwise,
the vhost lib handles it.
In this way, you could handle the MTU message inside vhost-pmd; thus,
there is no need to introduce one more (net-specific) API.
Thoughts?
--yliu
On 03/16/2017 09:00 AM, Yuanhan Liu wrote:
> On Sun, Mar 12, 2017 at 05:34:01PM +0100, Maxime Coquelin wrote:
>> This patch implements the function for the application to
>> get the MTU value.
>
> I'm thinking the other way. As we are making vhost being generic, it
> doesn't make too much sense now to store a net specific value at vhost
> layer. I'm thinking we may could introduce a vhost-user request handler,
> something like:
>
> rte_vhost_driver_register_msg_handler(socket, handler);
That's a good point.
> All vhost-user message then will goto the driver's sepcific handler.
> if it's handlered, do nothing in vhost lib. Otherwise, we handle it
> in vhost lib.
>
> In this way, you could handle the mtu message inside vhost-pmd; thus,
> there is no need to introduce one more (net specific) API.
>
> Thoughts?
I need to think more about it, but the advantage of having a dedicated
API is that when the MTU value is not available, you can tell from the
return code whether it is not yet available (-EAGAIN) or not supported
(-ENOTSUP).
That could be managed with the callback though, by calling the callback
with a 0 MTU value if not supported, so that the application can be
sure that if the callback hasn't been called, it is just not ready yet.
What do you think?
Maxime
On Thu, Mar 16, 2017 at 12:37:23PM +0100, Maxime Coquelin wrote:
> I need to think more about it, but advantage of having a dedicated API
> is that in case the MTU value is not available, you can know from
> return code whether it is not yet available (-EAGAIN), or not supported
> (-ENOTSUP).
>
> That could be managed with the callback tough, by calling the callback
> with a 0 mtu value if not supported, so that the application can be
> sure that if the callback hasn't been called, then it is just that it
> is not ready yet.
>
> What do you think?
I don't think the application should even be aware of the callback.
The application should get the MTU from the ethdev layer, through the
rte_eth_dev_get_mtu() API. Such an MTU request should be handled only
in vhost-pmd, to serve the rte_eth_dev_get_mtu() API.
--yliu
On 03/17/2017 06:32 AM, Yuanhan Liu wrote:
> I don't think the application should even be aware of the callback.
> Application should get the mtu from the ethdev layer, by the API
> rte_eth_dev_get_mtu(). And such MTU request should be only handled
> in vhost-pmd, to serve the rte_eth_dev_get_mtu() API.
I thought about OVS, which doesn't rely on the vhost PMD (at least it
didn't last time I checked).
Maxime
On Fri, Mar 17, 2017 at 10:59:12AM +0100, Maxime Coquelin wrote:
> I thought about OVS, which doesn't rely on Vhost PMD (at least didn't
> last time I checked).
Oh, right. Let's keep this API then.
--yliu
@@ -100,6 +100,21 @@ int rte_vhost_driver_callback_register(struct virtio_net_device_ops const * cons
int rte_vhost_driver_session_start(void);
/**
+ * Get the MTU value of the device if set in QEMU.
+ *
+ * @param vid
+ * virtio-net device ID
+ * @param mtu
+ * The variable to store the MTU value
+ *
+ * @return
+ * 0: success
+ * -EAGAIN: device not yet started
+ * -ENOTSUP: device does not support MTU feature
+ */
+int rte_vhost_get_mtu(int vid, uint16_t *mtu);
+
+/**
* Get the numa node from which the virtio net device's memory
* is allocated.
*
@@ -313,6 +313,25 @@ vhost_enable_dequeue_zero_copy(int vid)
}
int
+rte_vhost_get_mtu(int vid, uint16_t *mtu)
+{
+ struct virtio_net *dev = get_device(vid);
+
+ if (!dev)
+ return -ENODEV;
+
+ if (!(dev->flags & VIRTIO_DEV_READY))
+ return -EAGAIN;
+
+ if (!(dev->features & (1ULL << VIRTIO_NET_F_MTU)))
+ return -ENOTSUP;
+
+ *mtu = dev->mtu;
+
+ return 0;
+}
+
+int
rte_vhost_get_numa_node(int vid)
{
#ifdef RTE_LIBRTE_VHOST_NUMA