[v2] sched: fix port time rounding error

Message ID 20200625095930.18429-1-alan.dewar@att.com (mailing list archive)
State Accepted, archived
Delegated to: Thomas Monjalon
Series [v2] sched: fix port time rounding error

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/iol-broadcom-Performance success Performance Testing PASS
ci/iol-intel-Performance success Performance Testing PASS
ci/iol-nxp-Performance success Performance Testing PASS
ci/travis-robot success Travis build: passed
ci/iol-mellanox-Performance success Performance Testing PASS
ci/Intel-compilation success Compilation OK
ci/iol-testing success Testing PASS

Commit Message

Alan Dewar June 25, 2020, 9:59 a.m. UTC
  From: Alan Dewar <alan.dewar@att.com>

The QoS scheduler works off a port time that is computed from the number
of CPU cycles that have elapsed since the last time the port was
polled.  It divides the elapsed cycle count by the number of cycles per
byte to calculate how many bytes can be sent; however, this division can
generate rounding errors, in which some fraction of a byte may be lost.

Lose enough of these fractional bytes and the QoS scheduler
underperforms.  The problem is worse at low bandwidths.

To compensate for this rounding error, this fix does not advance the
port's time_cpu_cycles by the number of cycles that have elapsed, but
by the computed number of bytes that can be sent (which has been
rounded down) multiplied by the number of cycles per byte.  This means
that the port's time_cpu_cycles will momentarily lag behind the CPU
cycle counter.  At the next poll, the lag is taken into account.

v2:
If the cycles value wraps (after 100+ years of uptime), reset the port's
time_cpu_cycles back to zero.

Fixes: de3cfa2c98 ("sched: initial import")

Signed-off-by: Alan Dewar <alan.dewar@att.com>
---
 lib/librte_sched/rte_sched.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)
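
A minimal, self-contained C sketch (not DPDK code; the constants, names and
poll interval are invented for illustration) that simulates the old and new
resync strategies side by side.  It shows how truncating the cycles-to-bytes
division on every poll makes the old scheme undercount, while carrying the
remainder in the port clock keeps the long-run byte count on rate:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define TIME_SHIFT 8	/* fixed-point shift, mirroring RTE_SCHED_TIME_SHIFT */

int main(void)
{
	/* Q8 fixed point: 3000 cycles per byte, i.e. a low-rate port. */
	uint64_t cycles_per_byte = 3000ULL << TIME_SHIFT;
	uint64_t poll_interval = 10000;		/* cycles between polls */
	uint64_t cycles = 0;
	uint64_t old_time = 0, old_bytes = 0;	/* old resync state */
	uint64_t new_time = 0, new_bytes = 0;	/* fixed resync state */

	for (uint64_t i = 0; i < 1000000; i++) {
		cycles += poll_interval;

		/* Old behaviour: credit the rounded-down byte count, then
		 * jump the port clock to "cycles", dropping the remainder. */
		uint64_t diff = cycles - old_time;
		old_bytes += (diff << TIME_SHIFT) / cycles_per_byte;
		old_time = cycles;

		/* Fixed behaviour: advance the port clock only by the cycles
		 * actually accounted for; the fraction carries over to the
		 * next poll. */
		diff = cycles - new_time;
		uint64_t bytes = (diff << TIME_SHIFT) / cycles_per_byte;
		new_bytes += bytes;
		new_time += (bytes * cycles_per_byte) >> TIME_SHIFT;
	}

	printf("ideal: %" PRIu64 " bytes\n", cycles / 3000);	/* 3333333 */
	printf("old:   %" PRIu64 " bytes\n", old_bytes);	/* 3000000 */
	printf("new:   %" PRIu64 " bytes\n", new_bytes);	/* 3333333 */
	return 0;
}

With these numbers each poll ideally credits 3.33 bytes; the old resync
truncates that to 3 every time and loses a tenth of the bandwidth, while the
fixed resync alternates 3, 3, 4 and stays on rate.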
  

Comments

Thomas Monjalon July 5, 2020, 8:41 p.m. UTC | #1
Cristian, Jasvinder, please review.

25/06/2020 11:59, alangordondewar@gmail.com:
> From: Alan Dewar <alan.dewar@att.com>
> 
> [...]
  
Jasvinder Singh July 6, 2020, 9:20 p.m. UTC | #2
> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Sunday, July 5, 2020 9:42 PM
> To: Dumitrescu, Cristian <cristian.dumitrescu@intel.com>; Singh, Jasvinder
> <jasvinder.singh@intel.com>
> Cc: dev@dpdk.org; Alan Dewar <alan.dewar@att.com>;
> alangordondewar@gmail.com
> Subject: Re: [dpdk-dev] [PATCH v2] sched: fix port time rounding error
> 
> Cristian, Jasvinder, please review.
> 
> 25/06/2020 11:59, alangordondewar@gmail.com:
> > From: Alan Dewar <alan.dewar@att.com>
> >
> > [...]

Acked-by: Jasvinder Singh <jasvinder.singh@intel.com>
  
Thomas Monjalon July 6, 2020, 11:01 p.m. UTC | #3
> > 25/06/2020 11:59, alangordondewar@gmail.com:
> > > From: Alan Dewar <alan.dewar@att.com>
> > >
> > > [...]
> > >
> > > Signed-off-by: Alan Dewar <alan.dewar@att.com>
> 
> Acked-by: Jasvinder Singh <jasvinder.singh@intel.com>

Applied, thanks
  
Kevin Traynor Aug. 20, 2020, 2:32 p.m. UTC | #4
Hi,

On 25/06/2020 10:59, alangordondewar@gmail.com wrote:
> From: Alan Dewar <alan.dewar@att.com>
> 
> [...]
> diff --git a/lib/librte_sched/rte_sched.c b/lib/librte_sched/rte_sched.c
> index c0983ddda..7c022cd61 100644
> --- a/lib/librte_sched/rte_sched.c
> +++ b/lib/librte_sched/rte_sched.c
> @@ -222,6 +222,7 @@ struct rte_sched_port {
>  	uint64_t time_cpu_bytes;      /* Current CPU time measured in bytes */
>  	uint64_t time;                /* Current NIC TX time measured in bytes */
>  	struct rte_reciprocal inv_cycles_per_byte; /* CPU cycles per byte */
> +	uint64_t cycles_per_byte;
>  

I was backporting this patch to 18.11. The older ABI checker complains
about this structure change.

"cycles_per_byte has been added at the middle position of this
structural type."

Isn't this an ABI break? Dropping from 18.11 for the time being.

  
Ray Kinsella Aug. 21, 2020, 3:28 p.m. UTC | #5
On 20/08/2020 15:32, Kevin Traynor wrote:
> Hi,
> 
> On 25/06/2020 10:59, alangordondewar@gmail.com wrote:
>> From: Alan Dewar <alan.dewar@att.com>
>>
>> [...]
> 
> I was backporting this patch to 18.11. The older ABI checker complains
> about this structure change.
> 
> "cycles_per_byte has been added at the middle position of this
> structural type."
> 
> Isn't this an ABI break? Dropping from 18.11 for the time being.

So it looks like rte_sched_port * is an opaque pointer, as its contents are only
known to rte_sched.c and not outside. To everyone else it is an opaque data
structure, so structural changes to it would not be an ABI breakage.
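
A short sketch of the pattern Ray describes, with hypothetical names rather
than the real DPDK headers: because applications only ever hold the pointer,
the layout below can change freely inside the library.

#include <stdint.h>

/* What the public header exposes: a forward declaration only, so
 * applications are never compiled against the struct layout. */
struct sched_port;
struct sched_port *sched_port_create(void);

/* What stays private to the library's .c file: the full definition.
 * Adding cycles_per_byte here is invisible to existing binaries,
 * which only pass the opaque pointer back into the library. */
struct sched_port {
	uint64_t time_cpu_cycles;
	uint64_t time_cpu_bytes;
	uint64_t cycles_per_byte;	/* new field: no ABI impact */
};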

  
Kevin Traynor Sept. 7, 2020, 10:09 a.m. UTC | #6
On 21/08/2020 16:28, Kinsella, Ray wrote:
> 
> 
> On 20/08/2020 15:32, Kevin Traynor wrote:
>> Hi,
>>
>> On 25/06/2020 10:59, alangordondewar@gmail.com wrote:
>>> From: Alan Dewar <alan.dewar@att.com>
>>>
>>> [...]
>>
>> I was backporting this patch to 18.11. The older ABI checker complains
>> about this structure change.
>>
>> "cycles_per_byte has been added at the middle position of this
>> structural type."
>>
>> Isn't this an ABI break? Dropping from 18.11 for the time being.
> 
> So it looks like rte_sched_port * is an opaque pointer, as its contents are only
> known to rte_sched.c and not outside. To everyone else it is an opaque data
> structure, so structural changes to it would not be an ABI breakage.
> 

Thanks Ray, makes sense. I've included the fix in 18.11.10-rc1.

Kevin.

  

Patch

diff --git a/lib/librte_sched/rte_sched.c b/lib/librte_sched/rte_sched.c
index c0983ddda..7c022cd61 100644
--- a/lib/librte_sched/rte_sched.c
+++ b/lib/librte_sched/rte_sched.c
@@ -222,6 +222,7 @@  struct rte_sched_port {
 	uint64_t time_cpu_bytes;      /* Current CPU time measured in bytes */
 	uint64_t time;                /* Current NIC TX time measured in bytes */
 	struct rte_reciprocal inv_cycles_per_byte; /* CPU cycles per byte */
+	uint64_t cycles_per_byte;
 
 	/* Grinders */
 	struct rte_mbuf **pkts_out;
@@ -852,6 +853,7 @@  rte_sched_port_config(struct rte_sched_port_params *params)
 	cycles_per_byte = (rte_get_tsc_hz() << RTE_SCHED_TIME_SHIFT)
 		/ params->rate;
 	port->inv_cycles_per_byte = rte_reciprocal_value(cycles_per_byte);
+	port->cycles_per_byte = cycles_per_byte;
 
 	/* Grinders */
 	port->pkts_out = NULL;
@@ -2673,16 +2675,21 @@  static inline void
 rte_sched_port_time_resync(struct rte_sched_port *port)
 {
 	uint64_t cycles = rte_get_tsc_cycles();
-	uint64_t cycles_diff = cycles - port->time_cpu_cycles;
+	uint64_t cycles_diff;
 	uint64_t bytes_diff;
 	uint32_t i;
 
+	if (cycles < port->time_cpu_cycles)
+		port->time_cpu_cycles = 0;
+
+	cycles_diff = cycles - port->time_cpu_cycles;
 	/* Compute elapsed time in bytes */
 	bytes_diff = rte_reciprocal_divide(cycles_diff << RTE_SCHED_TIME_SHIFT,
 					   port->inv_cycles_per_byte);
 
 	/* Advance port time */
-	port->time_cpu_cycles = cycles;
+	port->time_cpu_cycles +=
+		(bytes_diff * port->cycles_per_byte) >> RTE_SCHED_TIME_SHIFT;
 	port->time_cpu_bytes += bytes_diff;
 	if (port->time < port->time_cpu_bytes)
 		port->time = port->time_cpu_bytes;
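
As a worked example of the arithmetic above (illustrative numbers, and
assuming RTE_SCHED_TIME_SHIFT is 8, as in this version of rte_sched.c): with
a 2 GHz TSC and a port rate of 1.25 MB/s,

    cycles_per_byte = (2000000000 << 8) / 1250000 = 409600

i.e. 1600 cycles per byte in Q8 fixed point.  If 10000 cycles then elapse
between polls:

    bytes_diff = (10000 << 8) / 409600 = 6    (6.25, rounded down)

(rte_reciprocal_divide performs this division via a precomputed
multiplicative inverse, avoiding a hardware divide on the fast path; the
result is the same rounded-down quotient.)  Before the fix, time_cpu_cycles
was set to the current cycle count and the 0.25-byte remainder was discarded
on every poll.  After the fix, time_cpu_cycles advances by
(6 * 409600) >> 8 = 9600 cycles, so the leftover 400 cycles are still owed to
the port and are credited at the next poll.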