diff mbox series

[v16,03/11] eal: change API of power intrinsics

Message ID f32d9217045ba88e93a708c9d07858dfda8c8447.1610473000.git.anatoly.burakov@intel.com (mailing list archive)
State Superseded, archived
Delegated to: Thomas Monjalon
Series Add PMD power management

Checks

Context Check Description
ci/checkpatch warning coding style issues

Commit Message

Burakov, Anatoly Jan. 12, 2021, 5:37 p.m. UTC
Instead of passing around pointers and integers, collect everything
into a struct. This makes API design around these intrinsics much easier.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---

Notes:
    v16:
    - Add error handling

 drivers/event/dlb/dlb.c                       | 10 ++--
 drivers/event/dlb2/dlb2.c                     | 10 ++--
 lib/librte_eal/arm/rte_power_intrinsics.c     | 20 +++-----
 .../include/generic/rte_power_intrinsics.h    | 50 ++++++++-----------
 lib/librte_eal/ppc/rte_power_intrinsics.c     | 20 +++-----
 lib/librte_eal/x86/rte_power_intrinsics.c     | 42 +++++++++-------
 6 files changed, 70 insertions(+), 82 deletions(-)

Comments

Ananyev, Konstantin Jan. 13, 2021, 1:01 p.m. UTC | #1
> 
> Instead of passing around pointers and integers, collect everything
> into a struct. This makes API design around these intrinsics much easier.
> 
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
> 
> Notes:
>     v16:
>     - Add error handling

There are a few trivial checkpatch warnings, please check
Burakov, Anatoly Jan. 13, 2021, 5:22 p.m. UTC | #2
On 13-Jan-21 1:01 PM, Ananyev, Konstantin wrote:
> 
>>
>> Instead of passing around pointers and integers, collect everything
>> into a struct. This makes API design around these intrinsics much easier.
>>
>> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
>> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
>> ---
>>
>> Notes:
>>      v16:
>>      - Add error handling
> 
> There are a few trivial checkpatch warnings, please check
> 

To paraphrase Nick Fury, I recognize that checkpatch has produced 
warnings, but given that I don't agree with what checkpatch has to say 
in this case, I've elected to ignore it :)

In particular, these warnings relate to the comments around struct 
members, which I think I've made look nice, and I also took care that 
the indentation keeps the code looking the same with different tab 
widths. So, I don't think they should be changed, unless you're 
suggesting re-laying out the comments on top of each member rather than 
at the side (which I think is more readable).
Ananyev, Konstantin Jan. 13, 2021, 6:01 p.m. UTC | #3
> On 13-Jan-21 1:01 PM, Ananyev, Konstantin wrote:
> >
> >>
> >> Instead of passing around pointers and integers, collect everything
> >> into a struct. This makes API design around these intrinsics much easier.
> >>
> >> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> >> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> >> ---
> >>
> >> Notes:
> >>      v16:
> >>      - Add error handling
> >
> > There are a few trivial checkpatch warnings, please check
> >
> 
> To paraphrase Nick Fury, I recognize that checkpatch has produced
> warnings, but given that I don't agree with what checkpatch has to say
> in this case, I've elected to ignore it :)
> 
> In particular, these warnings relate to the comments around struct
> members, which I think I've made look nice, and I also took care that
> the indentation keeps the code looking the same with different tab
> widths. So, I don't think they should be changed, unless you're
> suggesting re-laying out the comments on top of each member rather than
> at the side (which I think is more readable).

If top is not an option, it is possible to move the comment onto the lines just after the actual field:
	uint32_t x;
	/**<
	 * blah, blah
	 * blah, blah, blah
	 */
AFAIK that would keep checkpatch happy.
Burakov, Anatoly Jan. 14, 2021, 10:23 a.m. UTC | #4
On 13-Jan-21 6:01 PM, Ananyev, Konstantin wrote:
> 
>> On 13-Jan-21 1:01 PM, Ananyev, Konstantin wrote:
>>>
>>>>
>>>> Instead of passing around pointers and integers, collect everything
>>>> into a struct. This makes API design around these intrinsics much easier.
>>>>
>>>> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
>>>> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
>>>> ---
>>>>
>>>> Notes:
>>>>       v16:
>>>>       - Add error handling
>>>
>>> There are a few trivial checkpatch warnings, please check
>>>
>>
>> To paraphrase Nick Fury, I recognize that checkpatch has produced
>> warnings, but given that I don't agree with what checkpatch has to say
>> in this case, I've elected to ignore it :)
>>
>> In particular, these warnings relate to the comments around struct
>> members, which I think I've made look nice, and I also took care that
>> the indentation keeps the code looking the same with different tab
>> widths. So, I don't think they should be changed, unless you're
>> suggesting re-laying out the comments on top of each member rather than
>> at the side (which I think is more readable).
> 
> If top is not an option, it is possible to move the comment onto the lines just after the actual field:
> 	uint32_t x;
> 	/**<
> 	 * blah, blah
> 	 * blah, blah, blah
> 	 */
> AFAIK that would keep checkpatch happy.
> 

It's not so much "not an option" as that it would look less readable to 
me than what is currently there. If we're not going to keep the comments 
at the side, then on top they go. I'd prefer to keep it as is, but if 
you feel strongly about it, I can change it.
Ananyev, Konstantin Jan. 14, 2021, 12:33 p.m. UTC | #5
> 
> On 13-Jan-21 6:01 PM, Ananyev, Konstantin wrote:
> >
> >> On 13-Jan-21 1:01 PM, Ananyev, Konstantin wrote:
> >>>
> >>>>
> >>>> Instead of passing around pointers and integers, collect everything
> >>>> into a struct. This makes API design around these intrinsics much easier.
> >>>>
> >>>> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> >>>> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> >>>> ---
> >>>>
> >>>> Notes:
> >>>>       v16:
> >>>>       - Add error handling
> >>>
> >>> There are a few trivial checkpatch warnings, please check
> >>>
> >>
> >> To paraphrase Nick Fury, I recognize that checkpatch has produced
> >> warnings, but given that I don't agree with what checkpatch has to say
> >> in this case, I've elected to ignore it :)
> >>
> >> In particular, these warnings relate to the comments around struct
> >> members, which I think I've made look nice, and I also took care that
> >> the indentation keeps the code looking the same with different tab
> >> widths. So, I don't think they should be changed, unless you're
> >> suggesting re-laying out the comments on top of each member rather than
> >> at the side (which I think is more readable).
> >
> > If top is not an option, it is possible to move the comment onto the lines just after the actual field:
> > 	uint32_t x;
> > 	/**<
> > 	 * blah, blah
> > 	 * blah, blah, blah
> > 	 */
> > AFAIK that would keep checkpatch happy.
> >
> 
> It's not so much "not an option" as that it would look less readable to
> me than what is currently there. If we're not going to keep the comments
> at the side, then on top they go. I'd prefer to keep it as is, but if
> you feel strongly about it, I can change it.

I don't have any preference about comment placement.
I just thought it would be good to keep checkpatch happy,
especially if the changes in question are just cosmetic ones.

Patch

diff --git a/drivers/event/dlb/dlb.c b/drivers/event/dlb/dlb.c
index 0c95c4793d..d2f2026291 100644
--- a/drivers/event/dlb/dlb.c
+++ b/drivers/event/dlb/dlb.c
@@ -3161,6 +3161,7 @@  dlb_dequeue_wait(struct dlb_eventdev *dlb,
 		/* Interrupts not supported by PF PMD */
 		return 1;
 	} else if (dlb->umwait_allowed) {
+		struct rte_power_monitor_cond pmc;
 		volatile struct dlb_dequeue_qe *cq_base;
 		union {
 			uint64_t raw_qe[2];
@@ -3181,9 +3182,12 @@  dlb_dequeue_wait(struct dlb_eventdev *dlb,
 		else
 			expected_value = 0;
 
-		rte_power_monitor(monitor_addr, expected_value,
-				  qe_mask.raw_qe[1], timeout + start_ticks,
-				  sizeof(uint64_t));
+		pmc.addr = monitor_addr;
+		pmc.val = expected_value;
+		pmc.mask = qe_mask.raw_qe[1];
+		pmc.data_sz = sizeof(uint64_t);
+
+		rte_power_monitor(&pmc, timeout + start_ticks);
 
 		DLB_INC_STAT(ev_port->stats.traffic.rx_umonitor_umwait, 1);
 	} else {
diff --git a/drivers/event/dlb2/dlb2.c b/drivers/event/dlb2/dlb2.c
index 86724863f2..c9a8a02278 100644
--- a/drivers/event/dlb2/dlb2.c
+++ b/drivers/event/dlb2/dlb2.c
@@ -2870,6 +2870,7 @@  dlb2_dequeue_wait(struct dlb2_eventdev *dlb2,
 	if (elapsed_ticks >= timeout) {
 		return 1;
 	} else if (dlb2->umwait_allowed) {
+		struct rte_power_monitor_cond pmc;
 		volatile struct dlb2_dequeue_qe *cq_base;
 		union {
 			uint64_t raw_qe[2];
@@ -2890,9 +2891,12 @@  dlb2_dequeue_wait(struct dlb2_eventdev *dlb2,
 		else
 			expected_value = 0;
 
-		rte_power_monitor(monitor_addr, expected_value,
-				  qe_mask.raw_qe[1], timeout + start_ticks,
-				  sizeof(uint64_t));
+		pmc.addr = monitor_addr;
+		pmc.val = expected_value;
+		pmc.mask = qe_mask.raw_qe[1];
+		pmc.data_sz = sizeof(uint64_t);
+
+		rte_power_monitor(&pmc, timeout + start_ticks);
 
 		DLB2_INC_STAT(ev_port->stats.traffic.rx_umonitor_umwait, 1);
 	} else {
diff --git a/lib/librte_eal/arm/rte_power_intrinsics.c b/lib/librte_eal/arm/rte_power_intrinsics.c
index 7e7552fa8a..5f1caaf25b 100644
--- a/lib/librte_eal/arm/rte_power_intrinsics.c
+++ b/lib/librte_eal/arm/rte_power_intrinsics.c
@@ -8,15 +8,11 @@ 
  * This function is not supported on ARM.
  */
 int
-rte_power_monitor(const volatile void *p, const uint64_t expected_value,
-		const uint64_t value_mask, const uint64_t tsc_timestamp,
-		const uint8_t data_sz)
+rte_power_monitor(const struct rte_power_monitor_cond *pmc,
+		const uint64_t tsc_timestamp)
 {
-	RTE_SET_USED(p);
-	RTE_SET_USED(expected_value);
-	RTE_SET_USED(value_mask);
+	RTE_SET_USED(pmc);
 	RTE_SET_USED(tsc_timestamp);
-	RTE_SET_USED(data_sz);
 
 	return -ENOTSUP;
 }
@@ -25,16 +21,12 @@  rte_power_monitor(const volatile void *p, const uint64_t expected_value,
  * This function is not supported on ARM.
  */
 int
-rte_power_monitor_sync(const volatile void *p, const uint64_t expected_value,
-		const uint64_t value_mask, const uint64_t tsc_timestamp,
-		const uint8_t data_sz, rte_spinlock_t *lck)
+rte_power_monitor_sync(const struct rte_power_monitor_cond *pmc,
+		const uint64_t tsc_timestamp, rte_spinlock_t *lck)
 {
-	RTE_SET_USED(p);
-	RTE_SET_USED(expected_value);
-	RTE_SET_USED(value_mask);
+	RTE_SET_USED(pmc);
 	RTE_SET_USED(tsc_timestamp);
 	RTE_SET_USED(lck);
-	RTE_SET_USED(data_sz);
 
 	return -ENOTSUP;
 }
diff --git a/lib/librte_eal/include/generic/rte_power_intrinsics.h b/lib/librte_eal/include/generic/rte_power_intrinsics.h
index 37e4ec0414..3ad53068d5 100644
--- a/lib/librte_eal/include/generic/rte_power_intrinsics.h
+++ b/lib/librte_eal/include/generic/rte_power_intrinsics.h
@@ -18,6 +18,18 @@ 
  * which are architecture-dependent.
  */
 
+struct rte_power_monitor_cond {
+	volatile void *addr;  /**< Address to monitor for changes */
+	uint64_t val;         /**< Before attempting the monitoring, the address
+	                       *   may be read and compared against this value.
+	                       **/
+	uint64_t mask;   /**< 64-bit mask to extract current value from addr */
+	uint8_t data_sz; /**< Data size (in bytes) that will be used to compare
+	                  *   expected value with the memory address. Can be 1,
+	                  *   2, 4, or 8. Supplying any other value will lead to
+	                  *   undefined result. */
+};
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change without prior notice
@@ -35,20 +47,11 @@ 
  * @warning It is responsibility of the user to check if this function is
  *   supported at runtime using `rte_cpu_get_intrinsics_support()` API call.
  *
- * @param p
- *   Address to monitor for changes.
- * @param expected_value
- *   Before attempting the monitoring, the `p` address may be read and compared
- *   against this value. If `value_mask` is zero, this step will be skipped.
- * @param value_mask
- *   The 64-bit mask to use to extract current value from `p`.
+ * @param pmc
+ *   The monitoring condition structure.
  * @param tsc_timestamp
  *   Maximum TSC timestamp to wait for. Note that the wait behavior is
  *   architecture-dependent.
- * @param data_sz
- *   Data size (in bytes) that will be used to compare expected value with the
- *   memory address. Can be 1, 2, 4 or 8. Supplying any other value will lead
- *   to undefined result.
  *
  * @return
  *   0 on success
@@ -56,10 +59,8 @@ 
  *   -ENOTSUP if unsupported
  */
 __rte_experimental
-int rte_power_monitor(const volatile void *p,
-		const uint64_t expected_value, const uint64_t value_mask,
-		const uint64_t tsc_timestamp, const uint8_t data_sz);
-
+int rte_power_monitor(const struct rte_power_monitor_cond *pmc,
+		const uint64_t tsc_timestamp);
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change without prior notice
@@ -80,20 +81,11 @@  int rte_power_monitor(const volatile void *p,
  * @warning It is responsibility of the user to check if this function is
  *   supported at runtime using `rte_cpu_get_intrinsics_support()` API call.
  *
- * @param p
- *   Address to monitor for changes.
- * @param expected_value
- *   Before attempting the monitoring, the `p` address may be read and compared
- *   against this value. If `value_mask` is zero, this step will be skipped.
- * @param value_mask
- *   The 64-bit mask to use to extract current value from `p`.
+ * @param pmc
+ *   The monitoring condition structure.
  * @param tsc_timestamp
  *   Maximum TSC timestamp to wait for. Note that the wait behavior is
  *   architecture-dependent.
- * @param data_sz
- *   Data size (in bytes) that will be used to compare expected value with the
- *   memory address. Can be 1, 2, 4 or 8. Supplying any other value will lead
- *   to undefined result.
  * @param lck
  *   A spinlock that must be locked before entering the function, will be
  *   unlocked while the CPU is sleeping, and will be locked again once the CPU
@@ -105,10 +97,8 @@  int rte_power_monitor(const volatile void *p,
  *   -ENOTSUP if unsupported
  */
 __rte_experimental
-int rte_power_monitor_sync(const volatile void *p,
-		const uint64_t expected_value, const uint64_t value_mask,
-		const uint64_t tsc_timestamp, const uint8_t data_sz,
-		rte_spinlock_t *lck);
+int rte_power_monitor_sync(const struct rte_power_monitor_cond *pmc,
+		const uint64_t tsc_timestamp, rte_spinlock_t *lck);
 
 /**
  * @warning
diff --git a/lib/librte_eal/ppc/rte_power_intrinsics.c b/lib/librte_eal/ppc/rte_power_intrinsics.c
index 929e0611b0..5e5a1fff5a 100644
--- a/lib/librte_eal/ppc/rte_power_intrinsics.c
+++ b/lib/librte_eal/ppc/rte_power_intrinsics.c
@@ -8,15 +8,11 @@ 
  * This function is not supported on PPC64.
  */
 int
-rte_power_monitor(const volatile void *p, const uint64_t expected_value,
-		const uint64_t value_mask, const uint64_t tsc_timestamp,
-		const uint8_t data_sz)
+rte_power_monitor(const struct rte_power_monitor_cond *pmc,
+		const uint64_t tsc_timestamp)
 {
-	RTE_SET_USED(p);
-	RTE_SET_USED(expected_value);
-	RTE_SET_USED(value_mask);
+	RTE_SET_USED(pmc);
 	RTE_SET_USED(tsc_timestamp);
-	RTE_SET_USED(data_sz);
 
 	return -ENOTSUP;
 }
@@ -25,16 +21,12 @@  rte_power_monitor(const volatile void *p, const uint64_t expected_value,
  * This function is not supported on PPC64.
  */
 int
-rte_power_monitor_sync(const volatile void *p, const uint64_t expected_value,
-		const uint64_t value_mask, const uint64_t tsc_timestamp,
-		const uint8_t data_sz, rte_spinlock_t *lck)
+rte_power_monitor_sync(const struct rte_power_monitor_cond *pmc,
+		const uint64_t tsc_timestamp, rte_spinlock_t *lck)
 {
-	RTE_SET_USED(p);
-	RTE_SET_USED(expected_value);
-	RTE_SET_USED(value_mask);
+	RTE_SET_USED(pmc);
 	RTE_SET_USED(tsc_timestamp);
 	RTE_SET_USED(lck);
-	RTE_SET_USED(data_sz);
 
 	return -ENOTSUP;
 }
diff --git a/lib/librte_eal/x86/rte_power_intrinsics.c b/lib/librte_eal/x86/rte_power_intrinsics.c
index 2a38440bec..6be5c8b9f1 100644
--- a/lib/librte_eal/x86/rte_power_intrinsics.c
+++ b/lib/librte_eal/x86/rte_power_intrinsics.c
@@ -46,9 +46,8 @@  __check_val_size(const uint8_t sz)
  * Intel(R) 64 and IA-32 Architectures Software Developer's Manual.
  */
 int
-rte_power_monitor(const volatile void *p, const uint64_t expected_value,
-		const uint64_t value_mask, const uint64_t tsc_timestamp,
-		const uint8_t data_sz)
+rte_power_monitor(const struct rte_power_monitor_cond *pmc,
+		const uint64_t tsc_timestamp)
 {
 	const uint32_t tsc_l = (uint32_t)tsc_timestamp;
 	const uint32_t tsc_h = (uint32_t)(tsc_timestamp >> 32);
@@ -57,7 +56,10 @@  rte_power_monitor(const volatile void *p, const uint64_t expected_value,
 	if (!wait_supported)
 		return -ENOTSUP;
 
-	if (__check_val_size(data_sz) < 0)
+	if (pmc == NULL)
+		return -EINVAL;
+
+	if (__check_val_size(pmc->data_sz) < 0)
 		return -EINVAL;
 
 	/*
@@ -68,14 +70,15 @@  rte_power_monitor(const volatile void *p, const uint64_t expected_value,
 	/* set address for UMONITOR */
 	asm volatile(".byte 0xf3, 0x0f, 0xae, 0xf7;"
 			:
-			: "D"(p));
+			: "D"(pmc->addr));
 
-	if (value_mask) {
-		const uint64_t cur_value = __get_umwait_val(p, data_sz);
-		const uint64_t masked = cur_value & value_mask;
+	if (pmc->mask) {
+		const uint64_t cur_value = __get_umwait_val(
+				pmc->addr, pmc->data_sz);
+		const uint64_t masked = cur_value & pmc->mask;
 
 		/* if the masked value is already matching, abort */
-		if (masked == expected_value)
+		if (masked == pmc->val)
 			return 0;
 	}
 	/* execute UMWAIT */
@@ -93,9 +96,8 @@  rte_power_monitor(const volatile void *p, const uint64_t expected_value,
  * Intel(R) 64 and IA-32 Architectures Software Developer's Manual.
  */
 int
-rte_power_monitor_sync(const volatile void *p, const uint64_t expected_value,
-		const uint64_t value_mask, const uint64_t tsc_timestamp,
-		const uint8_t data_sz, rte_spinlock_t *lck)
+rte_power_monitor_sync(const struct rte_power_monitor_cond *pmc,
+		const uint64_t tsc_timestamp, rte_spinlock_t *lck)
 {
 	const uint32_t tsc_l = (uint32_t)tsc_timestamp;
 	const uint32_t tsc_h = (uint32_t)(tsc_timestamp >> 32);
@@ -104,7 +106,10 @@  rte_power_monitor_sync(const volatile void *p, const uint64_t expected_value,
 	if (!wait_supported)
 		return -ENOTSUP;
 
-	if (__check_val_size(data_sz) < 0)
+	if (pmc == NULL || lck == NULL)
+		return -EINVAL;
+
+	if (__check_val_size(pmc->data_sz) < 0)
 		return -EINVAL;
 
 	/*
@@ -115,14 +120,15 @@  rte_power_monitor_sync(const volatile void *p, const uint64_t expected_value,
 	/* set address for UMONITOR */
 	asm volatile(".byte 0xf3, 0x0f, 0xae, 0xf7;"
 			:
-			: "D"(p));
+			: "D"(pmc->addr));
 
-	if (value_mask) {
-		const uint64_t cur_value = __get_umwait_val(p, data_sz);
-		const uint64_t masked = cur_value & value_mask;
+	if (pmc->mask) {
+		const uint64_t cur_value = __get_umwait_val(
+				pmc->addr, pmc->data_sz);
+		const uint64_t masked = cur_value & pmc->mask;
 
 		/* if the masked value is already matching, abort */
-		if (masked == expected_value)
+		if (masked == pmc->val)
 			return 0;
 	}
 	rte_spinlock_unlock(lck);