Message ID | 20180330065831.107558-1-junjie.j.chen@intel.com (mailing list archive)
---|---
State | Accepted, archived
Context | Check | Description
---|---|---
ci/checkpatch | success | coding style OK |
ci/Intel-compilation | fail | Compilation issues |
ci/Intel-compilation | fail | apply issues |
Hi Maxime, Junjie,

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Junjie Chen
> Sent: Friday, March 30, 2018 2:59 PM
> To: Tan, Jianfeng <jianfeng.tan@intel.com>; maxime.coquelin@redhat.com;
> mtetsuyah@gmail.com
> Cc: dev@dpdk.org; Chen, Junjie J <junjie.j.chen@intel.com>;
> Chen@dpdk.org
> Subject: [dpdk-dev] [PATCH v3] net/vhost: fix segfault when creating vdev
> dynamically
>
> When creating a vdev dynamically, the vhost PMD starts directly without
> checking whether the TX/RX queues are ready, and thus causes a
> segmentation fault when the vhost library accesses the queues. This patch
> adds a flag to check whether the queues are set up, and adds queue setup
> to the dev_start function so the user can start the port after the queues
> are set up.
>
> Fixes: aed0b12930b3 ("net/vhost: fix socket file deleted on stop")
> Signed-off-by: Chen, Junjie <junjie.j.chen@intel.com>
> ---

Thanks for Junjie's patch!

I also came across a similar issue when developing virtio-user server mode.
From a user's perspective, this patch fixes the issue in my use case, whereas
the patch http://www.dpdk.org/dev/patchwork/patch/36340/ does not.

Tested-by: Zhiyong Yang <zhiyong.yang@intel.com>

Thanks
Zhiyong
Hi Junjie,

On 03/30/2018 08:58 AM, Junjie Chen wrote:
> When creating a vdev dynamically, the vhost PMD starts directly without
> checking whether the TX/RX queues are ready, and thus causes a
> segmentation fault when the vhost library accesses the queues. This patch
> adds a flag to check whether the queues are set up, and adds queue setup
> to the dev_start function so the user can start the port after the queues
> are set up.
>
> Fixes: aed0b12930b3 ("net/vhost: fix socket file deleted on stop")
> Signed-off-by: Chen, Junjie <junjie.j.chen@intel.com>
> ---
> Changes in v3:
> - Update commit log, refine duplicated code
> Changes in v2:
> - Check queue status in new_device, create queues in dev_start if not set up yet
>
>  drivers/net/vhost/rte_eth_vhost.c | 69 ++++++++++++++++++++++++++-------------
>  1 file changed, 46 insertions(+), 23 deletions(-)

Nice patch! Thanks for having handled the changes that quickly.

Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

Maxime
On 03/30/2018 09:32 AM, Yang, Zhiyong wrote:
> Hi Maxime, Junjie,
>
>> -----Original Message-----
>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Junjie Chen
>> Sent: Friday, March 30, 2018 2:59 PM
>> To: Tan, Jianfeng <jianfeng.tan@intel.com>; maxime.coquelin@redhat.com;
>> mtetsuyah@gmail.com
>> Cc: dev@dpdk.org; Chen, Junjie J <junjie.j.chen@intel.com>;
>> Chen@dpdk.org
>> Subject: [dpdk-dev] [PATCH v3] net/vhost: fix segfault when creating vdev
>> dynamically
>>
>> When creating a vdev dynamically, the vhost PMD starts directly without
>> checking whether the TX/RX queues are ready, and thus causes a
>> segmentation fault when the vhost library accesses the queues. This patch
>> adds a flag to check whether the queues are set up, and adds queue setup
>> to the dev_start function so the user can start the port after the queues
>> are set up.
>>
>> Fixes: aed0b12930b3 ("net/vhost: fix socket file deleted on stop")
>> Signed-off-by: Chen, Junjie <junjie.j.chen@intel.com>
>> ---
>
> Thanks for Junjie's patch!
>
> I also came across a similar issue when developing virtio-user server mode.
> From a user's perspective, this patch fixes the issue in my use case,
> whereas the patch http://www.dpdk.org/dev/patchwork/patch/36340/ does not.
>
> Tested-by: Zhiyong Yang <zhiyong.yang@intel.com>

Great that it solves your issue, and thanks for having tested it.

Maxime

> Thanks
> Zhiyong
On 03/30/2018 08:58 AM, Junjie Chen wrote:
> When creating a vdev dynamically, the vhost PMD starts directly without
> checking whether the TX/RX queues are ready, and thus causes a
> segmentation fault when the vhost library accesses the queues. This patch
> adds a flag to check whether the queues are set up, and adds queue setup
> to the dev_start function so the user can start the port after the queues
> are set up.
>
> Fixes: aed0b12930b3 ("net/vhost: fix socket file deleted on stop")
> Signed-off-by: Chen, Junjie <junjie.j.chen@intel.com>
> ---
> Changes in v3:
> - Update commit log, refine duplicated code
> Changes in v2:
> - Check queue status in new_device, create queues in dev_start if not set up yet
>
>  drivers/net/vhost/rte_eth_vhost.c | 69 ++++++++++++++++++++++++++-------------
>  1 file changed, 46 insertions(+), 23 deletions(-)

Applied to dpdk-next-virtio/master.

Thanks!
Maxime
Hi,

On Fri, Mar 30, 2018 at 02:58:31PM +0800, Junjie Chen wrote:
> When creating a vdev dynamically, the vhost PMD starts directly without
> checking whether the TX/RX queues are ready, and thus causes a
> segmentation fault when the vhost library accesses the queues. This patch
> adds a flag to check whether the queues are set up, and adds queue setup
> to the dev_start function so the user can start the port after the queues
> are set up.

For me, with this patch the vhost enqueue/dequeue code is never called,
because this check in eth_vhost_rx() is always true:

    if (unlikely(rte_atomic32_read(&r->allow_queuing) == 0))

When I revert this patch it works as usual.

My testpmd command line is:

gdb --args $RTE_SDK/install/bin/testpmd -l 0,2,3,4,5 --socket-mem=1024 -n 4 \
    --vdev 'net_vhost0,iface=/tmp/vhost-user1' \
    --vdev 'net_vhost1,iface=/tmp/vhost-user2' -- \
    --portmask=f --rxq=1 --txq=1 \
    --nb-cores=4 --forward-mode=io -i

After starting testpmd I issue the command "set portlist 0,2,1,3", start my
guest, and start another testpmd instance in the guest.

Another problem I see: before this patch I could start testpmd, issue the
portlist command, and type "start". If I do this now I get an infinite loop
of "VHOST CONFIG device not found" messages.

regards,
Jens
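For context, the check Jens quotes sits at the top of the vhost PMD's burst
functions. A simplified sketch of the receive side; apart from the quoted
check, the surrounding structure is reconstructed for illustration rather
than copied verbatim from the tree (the real function also toggles a
while_queuing flag around the critical section):

static uint16_t
eth_vhost_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
{
	struct vhost_queue *r = q;
	uint16_t nb_rx = 0;

	/* allow_queuing is only raised by update_queuing_status() once the
	 * port is both started and attached; while it stays 0, every burst
	 * call returns here without touching the vhost device. */
	if (unlikely(rte_atomic32_read(&r->allow_queuing) == 0))
		return 0;

	/* ... rte_vhost_dequeue_burst() and stats accounting follow ... */
	return nb_rx;
}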
Thanks for the report, I just submitted this patch to fix it:
https://dpdk.org/dev/patchwork/patch/37765/

> Hi,
>
> On Fri, Mar 30, 2018 at 02:58:31PM +0800, Junjie Chen wrote:
> > When creating a vdev dynamically, the vhost PMD starts directly without
> > checking whether the TX/RX queues are ready, and thus causes a
> > segmentation fault when the vhost library accesses the queues. This
> > patch adds a flag to check whether the queues are set up, and adds
> > queue setup to the dev_start function so the user can start the port
> > after the queues are set up.
>
> For me, with this patch the vhost enqueue/dequeue code is never called,
> because this check in eth_vhost_rx() is always true:
>
>     if (unlikely(rte_atomic32_read(&r->allow_queuing) == 0))
>
> When I revert this patch it works as usual.
>
> My testpmd command line is:
>
> gdb --args $RTE_SDK/install/bin/testpmd -l 0,2,3,4,5 --socket-mem=1024 -n 4 \
>     --vdev 'net_vhost0,iface=/tmp/vhost-user1' \
>     --vdev 'net_vhost1,iface=/tmp/vhost-user2' -- \
>     --portmask=f --rxq=1 --txq=1 \
>     --nb-cores=4 --forward-mode=io -i
>
> After starting testpmd I issue the command "set portlist 0,2,1,3", start
> my guest, and start another testpmd instance in the guest.
>
> Another problem I see: before this patch I could start testpmd, issue the
> portlist command, and type "start". If I do this now I get an infinite
> loop of "VHOST CONFIG device not found" messages.
>
> regards,
> Jens
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index 3aae01c39..11b607650 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -117,6 +117,7 @@ struct pmd_internal {
 	char *dev_name;
 	char *iface_name;
 	uint16_t max_queues;
+	uint16_t vid;
 	rte_atomic32_t started;
 };
 
@@ -527,8 +528,10 @@ update_queuing_status(struct rte_eth_dev *dev)
 	unsigned int i;
 	int allow_queuing = 1;
 
-	if (rte_atomic32_read(&internal->started) == 0 ||
-	    rte_atomic32_read(&internal->dev_attached) == 0)
+	if (rte_atomic32_read(&internal->dev_attached) == 0)
+		return;
+
+	if (rte_atomic32_read(&internal->started) == 0)
 		allow_queuing = 0;
 
 	/* Wait until rx/tx_pkt_burst stops accessing vhost device */
@@ -551,13 +554,36 @@ update_queuing_status(struct rte_eth_dev *dev)
 	}
 }
 
+static void
+queue_setup(struct rte_eth_dev *eth_dev, struct pmd_internal *internal)
+{
+	struct vhost_queue *vq;
+	int i;
+
+	for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
+		vq = eth_dev->data->rx_queues[i];
+		if (!vq)
+			continue;
+		vq->vid = internal->vid;
+		vq->internal = internal;
+		vq->port = eth_dev->data->port_id;
+	}
+	for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+		vq = eth_dev->data->tx_queues[i];
+		if (!vq)
+			continue;
+		vq->vid = internal->vid;
+		vq->internal = internal;
+		vq->port = eth_dev->data->port_id;
+	}
+}
+
 static int
 new_device(int vid)
 {
 	struct rte_eth_dev *eth_dev;
 	struct internal_list *list;
 	struct pmd_internal *internal;
-	struct vhost_queue *vq;
 	unsigned i;
 	char ifname[PATH_MAX];
 #ifdef RTE_LIBRTE_VHOST_NUMA
@@ -580,21 +606,13 @@ new_device(int vid)
 		eth_dev->data->numa_node = newnode;
 #endif
 
-	for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
-		vq = eth_dev->data->rx_queues[i];
-		if (vq == NULL)
-			continue;
-		vq->vid = vid;
-		vq->internal = internal;
-		vq->port = eth_dev->data->port_id;
-	}
-	for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
-		vq = eth_dev->data->tx_queues[i];
-		if (vq == NULL)
-			continue;
-		vq->vid = vid;
-		vq->internal = internal;
-		vq->port = eth_dev->data->port_id;
+	internal->vid = vid;
+	if (eth_dev->data->rx_queues && eth_dev->data->tx_queues) {
+		queue_setup(eth_dev, internal);
+		rte_atomic32_set(&internal->dev_attached, 1);
+	} else {
+		RTE_LOG(INFO, PMD, "RX/TX queues have not setup yet\n");
+		rte_atomic32_set(&internal->dev_attached, 0);
 	}
 
 	for (i = 0; i < rte_vhost_get_vring_num(vid); i++)
@@ -604,7 +622,6 @@ new_device(int vid)
 
 	eth_dev->data->dev_link.link_status = ETH_LINK_UP;
 
-	rte_atomic32_set(&internal->dev_attached, 1);
 	update_queuing_status(eth_dev);
 
 	RTE_LOG(INFO, PMD, "Vhost device %d created\n", vid);
@@ -634,8 +651,9 @@ destroy_device(int vid)
 	eth_dev = list->eth_dev;
 	internal = eth_dev->data->dev_private;
 
-	rte_atomic32_set(&internal->dev_attached, 0);
+	rte_atomic32_set(&internal->started, 0);
 	update_queuing_status(eth_dev);
+	rte_atomic32_set(&internal->dev_attached, 0);
 
 	eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
 
@@ -770,12 +788,17 @@ rte_eth_vhost_get_vid_from_port_id(uint16_t port_id)
 }
 
 static int
-eth_dev_start(struct rte_eth_dev *dev)
+eth_dev_start(struct rte_eth_dev *eth_dev)
 {
-	struct pmd_internal *internal = dev->data->dev_private;
+	struct pmd_internal *internal = eth_dev->data->dev_private;
+
+	if (unlikely(rte_atomic32_read(&internal->dev_attached) == 0)) {
+		queue_setup(eth_dev, internal);
+		rte_atomic32_set(&internal->dev_attached, 1);
+	}
 
 	rte_atomic32_set(&internal->started, 1);
-	update_queuing_status(dev);
+	update_queuing_status(eth_dev);
 
 	return 0;
 }
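For readers following the diff, the fields that queue_setup() initializes
live in the driver's per-queue structure. A rough sketch of struct
vhost_queue as it stood around this release; the exact field set and order
are reconstructed, not copied from the tree, and may differ:

struct vhost_queue {
	int vid;                       /* vhost device ID, valid once attached */
	rte_atomic32_t allow_queuing;  /* gate polled by the rx/tx burst paths */
	rte_atomic32_t while_queuing;  /* a burst function is inside the queue */
	struct pmd_internal *internal; /* back-pointer to device private data */
	struct rte_mempool *mb_pool;   /* mbuf pool for the receive side */
	uint16_t port;                 /* ethdev port ID */
	uint16_t virtqueue_id;         /* matching vring index */
	/* per-queue packet/byte/missed counters follow */
};

Caching the device ID in the new pmd_internal::vid field is what lets
eth_dev_start() run queue_setup() later, after new_device() has already
come and gone without finding any queues.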
When creating a vdev dynamically, the vhost PMD starts directly without
checking whether the TX/RX queues are ready, and thus causes a segmentation
fault when the vhost library accesses the queues. This patch adds a flag to
check whether the queues are set up, and adds queue setup to the dev_start
function so the user can start the port after the queues are set up.

Fixes: aed0b12930b3 ("net/vhost: fix socket file deleted on stop")
Signed-off-by: Chen, Junjie <junjie.j.chen@intel.com>
---
Changes in v3:
- Update commit log, refine duplicated code
Changes in v2:
- Check queue status in new_device, create queues in dev_start if not set up yet

 drivers/net/vhost/rte_eth_vhost.c | 69 ++++++++++++++++++++++++++-------------
 1 file changed, 46 insertions(+), 23 deletions(-)
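To illustrate the scenario being fixed: creating the vdev dynamically means
attaching it from an already-running testpmd rather than naming it on the
command line. A hedged example; the attach syntax and the resulting port
index depend on the testpmd version and on the ports already present:

testpmd> port attach net_vhost0,iface=/tmp/vhost-user0,queues=1
testpmd> port start 2

Before the patch, new_device() dereferenced rx_queues[]/tx_queues[] that
were still NULL as soon as a frontend connected to the socket, hence the
segfault. With the patch, new_device() only logs that the queues are not set
up yet, and eth_dev_start() performs queue_setup() before marking the device
attached.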