[V7] framework/qemu_kvm: pin VM's threads to vhost CPU lcore

Message ID 20221222014838.173362-1-weix.ling@intel.com (mailing list archive)
State Accepted
Series [V7] framework/qemu_kvm: pin VM's threads to vhost CPU lcore

Checks

Context                    Check    Description
ci/Intel-dts-format-test   success  Testing OK
ci/Intel-dts-pylama-test   success  Testing OK
ci/Intel-dts-suite-test    success  Testing OK

Commit Message

Ling, WeiX Dec. 22, 2022, 1:48 a.m. UTC
1) Pin the VM's threads to the vhost CPU lcores after starting the VM.
2) Fix an issue in the add_vm_daemon method.
3) Modify pin_threads() to pin the VM's threads to the vhost CPU lcores.

Signed-off-by: Wei Ling <weix.ling@intel.com>
---
 framework/qemu_kvm.py | 41 ++++++++++++++++++++++++++++++++---------
 1 file changed, 32 insertions(+), 9 deletions(-)
  

Comments

Tu, Lijuan Dec. 22, 2022, 8:26 a.m. UTC | #1
On Thu, 22 Dec 2022 09:48:38 +0800, Wei Ling <weix.ling@intel.com> wrote:
> 1) Pin the VM's threads to the vhost CPU lcores after starting the VM.
> 2) Fix an issue in the add_vm_daemon method.
> 3) Modify pin_threads() to pin the VM's threads to the vhost CPU lcores.
> 
> Signed-off-by: Wei Ling <weix.ling@intel.com>

Reviewed-by: Lijuan Tu <lijuan.tu@intel.com>
Applied, thanks
  

Patch

diff --git a/framework/qemu_kvm.py b/framework/qemu_kvm.py
index 20aa8008..dd8e7857 100644
--- a/framework/qemu_kvm.py
+++ b/framework/qemu_kvm.py
@@ -1241,7 +1241,7 @@  class QEMUKvm(VirtBase):
                 By default VM will start with the daemonize status.
                 Not support starting it on the stdin now.
         """
-        if "daemon" in list(options.keys()) and options["enable"] == "no":
+        if "enable" in list(options.keys()) and options["enable"] == "no":
             pass
         else:
             daemon_boot_line = "-daemonize"
@@ -1377,6 +1377,10 @@  class QEMUKvm(VirtBase):
 
         self.__get_pci_mapping()
 
+        # pin VM threads to host CPU cores
+        lcores = self.vcpus_pinned_to_vm.split(" ")
+        self.pin_threads(lcores=lcores)
+
         # query status
         self.update_status()
 
@@ -2004,13 +2008,32 @@  class QEMUKvm(VirtBase):
     def pin_threads(self, lcores):
         """
         Pin thread to assigned cores
-        """
-        thread_reg = r"CPU #(\d+): .* thread_id=(\d+)"
+        If threads <= lcores, like: threads=[427756, 427757], lcores=[48, 49, 50]:
+        taskset -pc 48 427756
+        taskset -pc 49 427757
+
+        If threads > lcores, like threads=[427756, 427757, 427758, 427759, 427760], lcores=[48,49,50]
+        taskset -pc 48 427756
+        taskset -pc 49 427757
+        taskset -pc 50 427758
+        taskset -pc 48 427759
+        taskset -pc 49 427760
+        """
+        thread_reg = r"CPU #\d+: thread_id=(\d+)"
         output = self.__monitor_session("info", "cpus")
-        thread_cores = re.findall(thread_reg, output)
-        cores_map = list(zip(thread_cores, lcores))
-        for thread_info, core_id in cores_map:
-            cpu_id, thread_id = thread_info
-            self.host_session.send_expect(
-                "taskset -pc %d %s" % (core_id, thread_id), "#"
+        threads = re.findall(thread_reg, output)
+        if len(threads) <= len(lcores):
+            map = list(zip(threads, lcores))
+        else:
+            self.host_logger.warning(
+                "lcores is less than VM's threads, 1 lcore will pin multiple VM's threads"
             )
+            lcore_len = len(lcores)
+            for item in threads:
+                thread_idx = threads.index(item)
+                if thread_idx >= lcore_len:
+                    lcore_idx = thread_idx % lcore_len
+                    lcores.append(lcores[lcore_idx])
+            map = list(zip(threads, lcores))
+        for thread, lcore in map:
+            self.host_session.send_expect("taskset -pc %s %s" % (lcore, thread), "#")
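
For readers who want to see the round-robin mapping in isolation, the sketch below reproduces the behaviour documented in the new pin_threads() docstring, using itertools.cycle instead of the index-based loop. It is only an illustration, not the applied code: the monitor output, thread ids and lcore numbers are invented, and DTS itself issues the equivalent taskset commands through host_session.send_expect() over SSH rather than subprocess.

import re
import subprocess
from itertools import cycle

# Fake "info cpus" output in the shape QEMU's monitor prints; in DTS it is
# obtained via the QEMU monitor session.
monitor_output = """
* CPU #0: thread_id=427756
  CPU #1: thread_id=427757
  CPU #2: thread_id=427758
  CPU #3: thread_id=427759
  CPU #4: thread_id=427760
"""

# Host lcores reserved for the VM (example values only).
lcores = ["48", "49", "50"]

# Same regex as the patch: capture only the host thread id of each vCPU.
threads = re.findall(r"CPU #\d+: thread_id=(\d+)", monitor_output)

# zip() against a cycled lcore list wraps around automatically, so a VM with
# more vCPU threads than reserved lcores reuses lcores round-robin, exactly
# like the taskset sequence shown in the docstring.
for thread_id, lcore in zip(threads, cycle(lcores)):
    # taskset -pc <cpu-list> <pid> pins an existing thread to the given core.
    subprocess.run(["taskset", "-pc", lcore, thread_id], check=False)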