[v4,4/4] dts: add test case that utilizes offload to pmd_buffer_scatter

Message ID: 20240613181510.30135-5-jspewock@iol.unh.edu (mailing list archive)
State: Superseded
Delegated to: Thomas Monjalon
Series: Add second scatter test case

Checks

Context              Check    Description
ci/checkpatch        success  coding style OK
ci/Intel-compilation warning  apply issues

Commit Message

Jeremy Spewock June 13, 2024, 6:15 p.m. UTC
  From: Jeremy Spewock <jspewock@iol.unh.edu>

Some NICs tested in DPDK allow for the scattering of packets without an
offload, while others require the scattered_rx offload to be enabled in
testpmd. The current version of the suite for testing support of
scattering packets only covers the case where the NIC scatters packets
without the offload, so an expansion of coverage is needed to cover the
second case as well.

depends-on: patch-139227 ("dts: skip test cases based on capabilities")

Signed-off-by: Jeremy Spewock <jspewock@iol.unh.edu>
---
 dts/tests/TestSuite_pmd_buffer_scatter.py | 75 ++++++++++++++++-------
 1 file changed, 53 insertions(+), 22 deletions(-)
  

Comments

Juraj Linkeš June 19, 2024, 8:51 a.m. UTC | #1
> -    def scatter_pktgen_send_packet(self, pktsize: int) -> str:
> +    def scatter_pktgen_send_packet(self, pktsize: int) -> list[Packet]:

A note: We should make this method a part of TestSuite (so that we have 
a common way to filter packets across all test suites) in a separate 
patchset as part of https://bugs.dpdk.org/show_bug.cgi?id=1438.

>           """Generate and send a packet to the SUT then capture what is forwarded back.
>   
>           Generate an IP packet of a specific length and send it to the SUT,
> -        then capture the resulting received packet and extract its payload.
> -        The desired length of the packet is met by packing its payload
> +        then capture the resulting received packets and filter them down to the ones that have the
> +        correct layers. The desired length of the packet is met by packing its payload
>           with the letter "X" in hexadecimal.
>   
>           Args:
>               pktsize: Size of the packet to generate and send.
>   
>           Returns:
> -            The payload of the received packet as a string.
> +            The filtered down list of received packets.
>           """

<snip>

>           with testpmd_shell as testpmd:
>               testpmd.set_forward_mode(TestPmdForwardingModes.mac)
> +            # adjust the MTU of the SUT ports
> +            for port_id in range(testpmd.number_of_ports):
> +                testpmd.set_port_mtu(port_id, 9000)

For a second I thought about maybe somehow using the decorator from the 
previous patch, but that only works with testpmd methods.

But then I thought about us setting this multiple times (twice (9000, 
then back to 1500) in each test case) and that a "better" place to put 
this would be set_up_suite() (and tear_down_suite()), but that has a 
major downside of starting testpmd two more times. Having it all in one 
place in set_up_suite() would surely make the whole test suite more 
understandable, but starting testpmd multiple times is not ideal. Maybe 
we have to do it like in this patch.

I also noticed that we don't really document why we're setting MTU to 
9000. The relation between MTU and mbuf size (I think that relation is 
the reason, correct me if I'm wrong) should be better documented, 
probably in set_up_suite().

>               testpmd.start()
>   
>               for offset in [-1, 0, 1, 4, 5]:
> -                recv_payload = self.scatter_pktgen_send_packet(mbsize + offset)
> -                self._logger.debug(
> -                    f"Payload of scattered packet after forwarding: \n{recv_payload}"
> -                )
> +                recv_packets = self.scatter_pktgen_send_packet(mbsize + offset)
> +                self._logger.debug(f"Relevant captured packets: \n{recv_packets}")
> +
>                   self.verify(
> -                    ("58 " * 8).strip() in recv_payload,
> +                    any(
> +                        " ".join(["58"] * 8) in hexstr(pakt.getlayer(2), onlyhex=1)
> +                        for pakt in recv_packets
> +                    ),
>                       "Payload of scattered packet did not match expected payload with offset "
>                       f"{offset}.",
>                   )
> +            testpmd.stop()

This sneaked right back in.
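[Editor's note: the expected substring in the verify call quoted above comes from how the payload is packed. The payload is the letter "X", whose ASCII code is 0x58, so scapy's hexstr() renders eight payload bytes as "58 58 58 ...". A minimal, scapy-free sketch of that check (the helper names are illustrative only):]

```python
def expected_payload_marker(n: int = 8) -> str:
    """Build the hex marker the test searches for: n bytes of 0x58."""
    return " ".join(["58"] * n)

def render_hex(payload: bytes) -> str:
    """Mimic scapy's hexstr(..., onlyhex=1): space-separated hex bytes."""
    return " ".join(f"{b:02x}" for b in payload)

# ord("X") == 0x58, so a payload of "X"s renders as "58 58 58 ..."
payload = b"X" * 32  # stand-in for the captured packet's Raw layer
assert expected_payload_marker() in render_hex(payload)
```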
  
Jeremy Spewock June 20, 2024, 7:24 p.m. UTC | #2
On Wed, Jun 19, 2024 at 4:51 AM Juraj Linkeš <juraj.linkes@pantheon.tech> wrote:
>
>
> > -    def scatter_pktgen_send_packet(self, pktsize: int) -> str:
> > +    def scatter_pktgen_send_packet(self, pktsize: int) -> list[Packet]:
>
> A note: We should make this method a part of TestSuite (so that we have
> a common way to filter packets across all test suites) in a separate
> patchset as part of https://bugs.dpdk.org/show_bug.cgi?id=1438.

That's a good idea.

>
> >           """Generate and send a packet to the SUT then capture what is forwarded back.
> >
> >           Generate an IP packet of a specific length and send it to the SUT,
> > -        then capture the resulting received packet and extract its payload.
> > -        The desired length of the packet is met by packing its payload
> > +        then capture the resulting received packets and filter them down to the ones that have the
> > +        correct layers. The desired length of the packet is met by packing its payload
> >           with the letter "X" in hexadecimal.
> >
> >           Args:
> >               pktsize: Size of the packet to generate and send.
> >
> >           Returns:
> > -            The payload of the received packet as a string.
> > +            The filtered down list of received packets.
> >           """
>
> <snip>
>
> >           with testpmd_shell as testpmd:
> >               testpmd.set_forward_mode(TestPmdForwardingModes.mac)
> > +            # adjust the MTU of the SUT ports
> > +            for port_id in range(testpmd.number_of_ports):
> > +                testpmd.set_port_mtu(port_id, 9000)
>
> For a second I thought about maybe somehow using the decorator from the
> previous patch, but that only works with testpmd methods.
>
> But then I thought about us setting this multiple times (twice (9000,
> then back to 1500) in each test case) and that a "better" place to put
> this would be set_up_suite() (and tear_down_suite()), but that has a
> major downside of starting testpmd two more times. Having it all in one
> place in set_up_suite() would surely make the whole test suite more
> understandable, but starting testpmd multiple times is not ideal. Maybe
> we have to do it like in this patch.

Right, I ended up putting it here just because the shell was already
started here so it was convenient, but setting the MTU and resetting
it multiple times is also definitely not ideal. I'm not really sure of
exactly the best way to handle it either unfortunately. Something else
I could do is have my own boolean that just tracks if the MTU has been
updated yet and only do it the first time, but then there would have
to be some kind of way to track which case is the last one to run
which is also a whole can of worms. I think overall the cost of
switching MTUs more than we need to is less than that of starting
testpmd 2 extra times with only these two test cases, but if more are
added it could end up being the opposite.

As a note though, from what I have recently seen while testing this,
this change of MTU seems like it is generally needed when you are
bound to the kernel driver while running DPDK instead of vfio-pci. One
of the parameters that is passed into testpmd in this suite is
--max-pkt-len and this adjusts the MTU of the ports before starting
testpmd. However, since some NICs use the kernel driver as their
driver for DPDK as well, this is not sufficient in all cases since the
MTU of the kernel interface is not updated by this parameter and the
packets still get dropped.  So, for example, if you start testpmd with
a Mellanox NIC bound to mlx5_core and the parameter
--max-pkt-len=9000, the MTU of the port when you do a `show port info
0` will be 8982, but if you do an `ip a` command you will see that the
network interface still shows an MTU value of 1500 and the packets
will be dropped if they exceed the MTU set on the network interface.
In all cases the MTU must be higher than 2048, so I set it using
testpmd to be agnostic of which driver you are bound to, as long as it
is a DPDK driver.

I'm not sure if this is a bug or intentional because of something that
blocks the updating of the network interface for some reason, but it
might be worth mentioning to testpmd/ethdev maintainers regardless and
I can raise it to them. If the `--max-pkt-len` parameter did update
this MTU or always allowed receiving traffic at that size then we
would not need to set the MTU in any test cases and it would be
handled by testpmd on startup. In the meantime, there has to be this
manual adjustment of MTU for the test cases to pass on any NIC that
runs DPDK on its kernel driver.
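[Editor's note: the 8982 figure observed above is consistent with testpmd deriving the reported MTU from the max packet length by subtracting the L2 overhead (14-byte Ethernet header plus 4-byte CRC). A quick sanity check of that arithmetic; the function name is just for illustration:]

```python
ETHER_HDR_LEN = 14  # destination MAC + source MAC + EtherType
ETHER_CRC_LEN = 4   # frame check sequence

def mtu_from_max_pkt_len(max_pkt_len: int) -> int:
    """MTU testpmd reports for a given --max-pkt-len (L2 overhead removed)."""
    return max_pkt_len - ETHER_HDR_LEN - ETHER_CRC_LEN

# --max-pkt-len=9000 corresponds to the MTU 8982 shown by `show port info 0`
assert mtu_from_max_pkt_len(9000) == 8982
```

The same arithmetic gives the familiar 1500-byte MTU for a standard 1518-byte Ethernet frame.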

>
> I also noticed that we don't really document why we're setting MTU to
> 9000. The relation between MTU and mbuf size (I think that relation is
> the reason, correct me if I'm wrong) should be better documented,
> probably in set_up_suite().

It isn't so much the relation to the mbuf size as it is that, to test
the scattering of packets, you have to send and receive packets that
are greater than that mbuf size, so we have to increase the MTU to
transmit those packets. Testpmd can run with the given parameters
(--mbuf-size=2048, --max-pkt-len=9000, or both together) without the
MTU change, but as I alluded to above, the MTU in testpmd isn't always
true to what the network interface says it is.

>
> >               testpmd.start()
> >
> >               for offset in [-1, 0, 1, 4, 5]:
> > -                recv_payload = self.scatter_pktgen_send_packet(mbsize + offset)
> > -                self._logger.debug(
> > -                    f"Payload of scattered packet after forwarding: \n{recv_payload}"
> > -                )
> > +                recv_packets = self.scatter_pktgen_send_packet(mbsize + offset)
> > +                self._logger.debug(f"Relevant captured packets: \n{recv_packets}")
> > +
> >                   self.verify(
> > -                    ("58 " * 8).strip() in recv_payload,
> > +                    any(
> > +                        " ".join(["58"] * 8) in hexstr(pakt.getlayer(2), onlyhex=1)
> > +                        for pakt in recv_packets
> > +                    ),
> >                       "Payload of scattered packet did not match expected payload with offset "
> >                       f"{offset}.",
> >                   )
> > +            testpmd.stop()
>
> This sneaked right back in.

It did, but this time it actually is needed. With the MTU of ports
being reset back to 1500 at the end of the test, we have to stop
packet forwarding first so that the individual ports can be stopped
for modification of their MTUs.
  
Juraj Linkeš June 21, 2024, 8:32 a.m. UTC | #3
>>>            with testpmd_shell as testpmd:
>>>                testpmd.set_forward_mode(TestPmdForwardingModes.mac)
>>> +            # adjust the MTU of the SUT ports
>>> +            for port_id in range(testpmd.number_of_ports):
>>> +                testpmd.set_port_mtu(port_id, 9000)
>>
>> For a second I thought about maybe somehow using the decorator from the
>> previous patch, but that only works with testpmd methods.
>>
>> But then I thought about us setting this multiple times (twice (9000,
>> then back to 1500) in each test case) and that a "better" place to put
>> this would be set_up_suite() (and tear_down_suite()), but that has a
>> major downside of starting testpmd two more times. Having it all in one
>> place in set_up_suite() would surely make the whole test suite more
>> understandable, but starting testpmd multiple times is not ideal. Maybe
>> we have to do it like in this patch.
> 
> Right, I ended up putting it here just because the shell was already
> started here so it was convenient, but setting the MTU and resetting
> it multiple times is also definitely not ideal. I'm not really sure of
> exactly the best way to handle it either unfortunately. Something else
> I could do is have my own boolean that just tracks if the MTU has been
> updated yet and only do it the first time, but then there would have
> to be some kind of way to track which case is the last one to run
> which is also a whole can of worms. I think overall the cost of
> switching MTUs more than we need to is less than that of starting
> testpmd 2 extra times with only these two test cases, but if more are
> added it could end up being the opposite.
> 
> As a note though, from what I have recently seen while testing this,
> this change of MTU seems like it is generally needed when you are
> bound to the kernel driver while running DPDK instead of vfio-pci. One
> of the parameters that is passed into testpmd in this suite is
> --max-pkt-len and this adjusts the MTU of the ports before starting
> testpmd. However, since some NICs use the kernel driver as their
> driver for DPDK as well, this is not sufficient in all cases since the
> MTU of the kernel interface is not updated by this parameter and the
> packets still get dropped.  So, for example, if you start testpmd with
> a Mellanox NIC bound to mlx5_core and the parameter
> --max-pkt-len=9000, the MTU of the port when you do a `show port info
> 0` will be 8982, but if you do an `ip a` command you will see that the
> network interface still shows an MTU value of 1500 and the packets
> will be dropped if they exceed the MTU set on the network interface.
> In all cases the MTU must be higher than 2048, so I set it using
> testpmd to be agnostic of which driver you are bound to, as long as it
> is a DPDK driver.
> 
> I'm not sure if this is a bug or intentional because of something that
> blocks the updating of the network interface for some reason, but it
> might be worth mentioning to testpmd/ethdev maintainers regardless and
> I can raise it to them. If the `--max-pkt-len` parameter did update
> this MTU or always allowed receiving traffic at that size then we
> would not need to set the MTU in any test cases and it would be
> handled by testpmd on startup. In the meantime, there has to be this
> manual adjustment of MTU for the test cases to pass on any NIC that
> runs DPDK on its kernel driver.
> 

This is interesting. So the "--max-pkt-len" parameter doesn't set the 
MTU in the kernel if bound to a kernel driver, but the testpmd command 
("port config mtu {port_id} {mtu}") does that properly?
The most obvious thing to think would be that both should be configuring 
the MTU the same way. In that case, it sounds like some sort of race 
condition when starting testpmd. Or something has to be done differently 
when setting the MTU during init time, in which case it's not a bug, but 
we should try to understand the reason. Or it could be something entirely 
different. We should talk to the maintainers or maybe look into the 
testpmd code to figure out how the two ways differ.

>>
>> I also noticed that we don't really document why we're setting MTU to
>> 9000. The relation between MTU and mbuf size (I think that relation is
>> the reason, correct me if I'm wrong) should be better documented,
>> probably in set_up_suite().
> 
> It isn't so much the relation to the mbuf size as it is that, to test
> the scattering of packets, you have to send and receive packets that
> are greater than that mbuf size, so we have to increase the MTU to
> transmit those packets. Testpmd can run with the given parameters
> (--mbuf-size=2048, --max-pkt-len=9000, or both together) without the
> MTU change, but as I alluded to above, the MTU in testpmd isn't always
> true to what the network interface says it is.
> 

That's basically what I meant by the relation between MTU and mbuf size 
:-). Let's put a clear reason for increasing the MTU into the 
set_up_suite docstring: that it must be higher than the mbuf size so 
that we're receiving packets big enough that they don't fit into just 
one buffer. We currently just say we need to "support larger packet 
sizes", but I'd like to be more explicit about the reason for needing 
the larger packet size and how large the packets actually need to be, 
as it may not be obvious, since we're setting the MTU way higher than 
2048 (+5).
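[Editor's note: to make the MTU/mbuf relation concrete, a packet only scatters when it doesn't fit into a single mbuf, so the number of receive buffers grows with packet size. A rough sketch of the arithmetic behind the test's mbsize + offset loop; this ignores per-driver headroom and CRC handling, so it is illustrative only:]

```python
import math

def mbufs_needed(pkt_len: int, mbuf_size: int) -> int:
    """Minimum number of mbuf_size buffers needed to hold pkt_len bytes."""
    return math.ceil(pkt_len / mbuf_size)

mbsize = 2048
for offset in [-1, 0, 1, 4, 5]:
    pkt_len = mbsize + offset
    # by this simple model, offsets >= 1 push the packet past one mbuf,
    # forcing scattered Rx (real drivers may scatter even at the boundary
    # once the CRC is accounted for)
    print(offset, mbufs_needed(pkt_len, mbsize))
```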

>>
>>>                testpmd.start()
>>>
>>>                for offset in [-1, 0, 1, 4, 5]:
>>> -                recv_payload = self.scatter_pktgen_send_packet(mbsize + offset)
>>> -                self._logger.debug(
>>> -                    f"Payload of scattered packet after forwarding: \n{recv_payload}"
>>> -                )
>>> +                recv_packets = self.scatter_pktgen_send_packet(mbsize + offset)
>>> +                self._logger.debug(f"Relevant captured packets: \n{recv_packets}")
>>> +
>>>                    self.verify(
>>> -                    ("58 " * 8).strip() in recv_payload,
>>> +                    any(
>>> +                        " ".join(["58"] * 8) in hexstr(pakt.getlayer(2), onlyhex=1)
>>> +                        for pakt in recv_packets
>>> +                    ),
>>>                        "Payload of scattered packet did not match expected payload with offset "
>>>                        f"{offset}.",
>>>                    )
>>> +            testpmd.stop()
>>
>> This sneaked right back in.
> 
> It did, but this time it actually is needed. With the MTU of ports
> being reset back to 1500 at the end of the test, we have to stop
> packet forwarding first so that the individual ports can be stopped
> for modification of their MTUs.

Oh, we can't modify the MTU while the port is running. I missed that, 
thanks.
  

Patch

diff --git a/dts/tests/TestSuite_pmd_buffer_scatter.py b/dts/tests/TestSuite_pmd_buffer_scatter.py
index 645a66b607..f7bdd4fbcf 100644
--- a/dts/tests/TestSuite_pmd_buffer_scatter.py
+++ b/dts/tests/TestSuite_pmd_buffer_scatter.py
@@ -16,14 +16,19 @@ 
 """
 
 import struct
+from typing import ClassVar
 
 from scapy.layers.inet import IP  # type: ignore[import]
 from scapy.layers.l2 import Ether  # type: ignore[import]
-from scapy.packet import Raw  # type: ignore[import]
+from scapy.packet import Packet, Raw  # type: ignore[import]
 from scapy.utils import hexstr  # type: ignore[import]
 
-from framework.remote_session.testpmd_shell import TestPmdForwardingModes, TestPmdShell
-from framework.test_suite import TestSuite
+from framework.remote_session.testpmd_shell import (
+    NicCapability,
+    TestPmdForwardingModes,
+    TestPmdShell,
+)
+from framework.test_suite import TestSuite, requires
 
 
 class TestPmdBufferScatter(TestSuite):
@@ -48,6 +53,14 @@  class TestPmdBufferScatter(TestSuite):
        and a single byte of packet data stored in a second buffer alongside the CRC.
     """
 
+    #: Parameters for testing scatter using testpmd which are universal across all test cases.
+    base_testpmd_parameters: ClassVar[list[str]] = [
+        "--mbcache=200",
+        "--max-pkt-len=9000",
+        "--port-topology=paired",
+        "--tx-offloads=0x00008000",
+    ]
+
     def set_up_suite(self) -> None:
         """Set up the test suite.
 
@@ -64,19 +77,19 @@  def set_up_suite(self) -> None:
         self.tg_node.main_session.configure_port_mtu(9000, self._tg_port_egress)
         self.tg_node.main_session.configure_port_mtu(9000, self._tg_port_ingress)
 
-    def scatter_pktgen_send_packet(self, pktsize: int) -> str:
+    def scatter_pktgen_send_packet(self, pktsize: int) -> list[Packet]:
         """Generate and send a packet to the SUT then capture what is forwarded back.
 
         Generate an IP packet of a specific length and send it to the SUT,
-        then capture the resulting received packet and extract its payload.
-        The desired length of the packet is met by packing its payload
+        then capture the resulting received packets and filter them down to the ones that have the
+        correct layers. The desired length of the packet is met by packing its payload
         with the letter "X" in hexadecimal.
 
         Args:
             pktsize: Size of the packet to generate and send.
 
         Returns:
-            The payload of the received packet as a string.
+            The filtered down list of received packets.
         """
         packet = Ether() / IP() / Raw()
         packet.getlayer(2).load = ""
@@ -86,51 +99,69 @@  def scatter_pktgen_send_packet(self, pktsize: int) -> str:
         for X_in_hex in payload:
             packet.load += struct.pack("=B", int("%s%s" % (X_in_hex[0], X_in_hex[1]), 16))
         received_packets = self.send_packet_and_capture(packet)
+        # filter down the list to packets that have the appropriate structure
+        received_packets = list(
+            filter(lambda p: Ether in p and IP in p and Raw in p, received_packets)
+        )
         self.verify(len(received_packets) > 0, "Did not receive any packets.")
-        load = hexstr(received_packets[0].getlayer(2), onlyhex=1)
 
-        return load
+        return received_packets
 
-    def pmd_scatter(self, mbsize: int) -> None:
+    def pmd_scatter(self, mbsize: int, extra_testpmd_params: list[str] = []) -> None:
         """Testpmd support of receiving and sending scattered multi-segment packets.
 
         Support for scattered packets is shown by sending 5 packets of differing length
         where the length of the packet is calculated by taking mbuf-size + an offset.
         The offsets used in the test are -1, 0, 1, 4, 5 respectively.
 
+        Args:
+            mbsize: Size to set memory buffers to when starting testpmd.
+            extra_testpmd_params: Additional parameters to add to the base list when starting
+                testpmd.
+
         Test:
-            Start testpmd and run functional test with preset mbsize.
+            Start testpmd and run functional test with preset `mbsize`.
         """
         testpmd_shell = self.sut_node.create_interactive_shell(
             TestPmdShell,
-            app_parameters=(
-                "--mbcache=200 "
-                f"--mbuf-size={mbsize} "
-                "--max-pkt-len=9000 "
-                "--port-topology=paired "
-                "--tx-offloads=0x00008000"
+            app_parameters=" ".join(
+                [*self.base_testpmd_parameters, f"--mbuf-size={mbsize}", *extra_testpmd_params]
             ),
             privileged=True,
         )
         with testpmd_shell as testpmd:
             testpmd.set_forward_mode(TestPmdForwardingModes.mac)
+            # adjust the MTU of the SUT ports
+            for port_id in range(testpmd.number_of_ports):
+                testpmd.set_port_mtu(port_id, 9000)
             testpmd.start()
 
             for offset in [-1, 0, 1, 4, 5]:
-                recv_payload = self.scatter_pktgen_send_packet(mbsize + offset)
-                self._logger.debug(
-                    f"Payload of scattered packet after forwarding: \n{recv_payload}"
-                )
+                recv_packets = self.scatter_pktgen_send_packet(mbsize + offset)
+                self._logger.debug(f"Relevant captured packets: \n{recv_packets}")
+
                 self.verify(
-                    ("58 " * 8).strip() in recv_payload,
+                    any(
+                        " ".join(["58"] * 8) in hexstr(pakt.getlayer(2), onlyhex=1)
+                        for pakt in recv_packets
+                    ),
                     "Payload of scattered packet did not match expected payload with offset "
                     f"{offset}.",
                 )
+            testpmd.stop()
+            # reset the MTU of the SUT ports
+            for port_id in range(testpmd.number_of_ports):
+                testpmd.set_port_mtu(port_id, 1500)
 
+    @requires(NicCapability.scattered_rx)
     def test_scatter_mbuf_2048(self) -> None:
         """Run the :meth:`pmd_scatter` test with `mbsize` set to 2048."""
         self.pmd_scatter(mbsize=2048)
 
+    def test_scatter_mbuf_2048_with_offload(self) -> None:
+        """Run the :meth:`pmd_scatter` test with `mbsize` set to 2048 and rx_scatter offload."""
+        self.pmd_scatter(mbsize=2048, extra_testpmd_params=["--enable-scatter"])
+
     def tear_down_suite(self) -> None:
         """Tear down the test suite.