Patch Detail
[v2,06/12] bus/dpaa: move internal symbols into INTERNAL section

Message ID:   <20200512140100.26803-6-hemant.agrawal@nxp.com>
State:        Superseded, archived
Series:       [v2,01/12] common/dpaax: move internal symbols into INTERNAL section (version 2)
Submitter:    Hemant Agrawal <hemant.agrawal@nxp.com>
Date:         Tue, 12 May 2020 19:30:54 +0530
To:           dev@dpdk.org, david.marchand@redhat.com
Cc:           Hemant Agrawal <hemant.agrawal@nxp.com>
In-Reply-To:  <20200512140100.26803-1-hemant.agrawal@nxp.com>
References:   <20200505140832.646-1-hemant.agrawal@nxp.com>
              <20200512140100.26803-1-hemant.agrawal@nxp.com>
Checks:       fail (https://patches.dpdk.org/api/patches/70118/checks/)
Hash:         b3df1c5412ad9073ab31b6a5347742358d3400f5
Archive:      https://inbox.dpdk.org/dev/20200512140100.26803-6-hemant.agrawal@nxp.com
Mbox:         https://patches.dpdk.org/project/dpdk/patch/20200512140100.26803-6-hemant.agrawal@nxp.com/mbox/

Commit Message

This patch moves the internal symbols to INTERNAL sections
so that any change in them is not reported as ABI breakage.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/bus/dpaa/include/fsl_bman.h       |  6 +++++
 drivers/bus/dpaa/include/fsl_fman.h       | 27 +++++++++++++++++++
 drivers/bus/dpaa/include/fsl_qman.h       | 32 +++++++++++++++++++++++
 drivers/bus/dpaa/include/fsl_usd.h        |  6 +++++
 drivers/bus/dpaa/include/netcfg.h         |  2 ++
 drivers/bus/dpaa/rte_bus_dpaa_version.map |  7 +----
 drivers/bus/dpaa/rte_dpaa_bus.h           |  5 ++++
 7 files changed, 79 insertions(+), 6 deletions(-)

Diff

diff --git a/drivers/bus/dpaa/include/fsl_bman.h b/drivers/bus/dpaa/include/fsl_bman.h
index f9cd972153..82da2fcfe0 100644
--- a/drivers/bus/dpaa/include/fsl_bman.h
+++ b/drivers/bus/dpaa/include/fsl_bman.h
@@ -264,12 +264,14 @@ int bman_shutdown_pool(u32 bpid);
 * the structure provided by the caller can be released or reused after the
 * function returns.
 */
+__rte_internal
 struct bman_pool *bman_new_pool(const struct bman_pool_params *params);
 
 /**
 * bman_free_pool - Deallocates a Buffer Pool object
 * @pool: the pool object to release
 */
+__rte_internal
 void bman_free_pool(struct bman_pool *pool);
 
 /**
@@ -279,6 +281,7 @@ void bman_free_pool(struct bman_pool *pool);
 * The returned pointer refers to state within the pool object so must not be
 * modified and can no longer be read once the pool object is destroyed.
 */
+__rte_internal
 const struct bman_pool_params *bman_get_params(const struct bman_pool *pool);
 
 /**
@@ -289,6 +292,7 @@ const struct bman_pool_params *bman_get_params(const struct bman_pool *pool);
 * @flags: bit-mask of BMAN_RELEASE_FLAG_*** options
 *
 */
+__rte_internal
 int bman_release(struct bman_pool *pool, const struct bm_buffer *bufs, u8 num,
 		 u32 flags);
 
@@ -302,6 +306,7 @@ int bman_release(struct bman_pool *pool, const struct bm_buffer *bufs, u8 num,
 * The return value will be the number of buffers obtained from the pool, or a
 * negative error code if a h/w error or pool starvation was encountered.
 */
+__rte_internal
 int bman_acquire(struct bman_pool *pool, struct bm_buffer *bufs, u8 num,
 		 u32 flags);
 
@@ -317,6 +322,7 @@ int bman_query_pools(struct bm_pool_state *state);
 *
 * Return the number of the free buffers
 */
+__rte_internal
 u32 bman_query_free_buffers(struct bman_pool *pool);
 
 /**
diff --git a/drivers/bus/dpaa/include/fsl_fman.h b/drivers/bus/dpaa/include/fsl_fman.h
index 5705ebfdce..6c87c8db0d 100644
--- a/drivers/bus/dpaa/include/fsl_fman.h
+++ b/drivers/bus/dpaa/include/fsl_fman.h
@@ -7,6 +7,8 @@
 #ifndef __FSL_FMAN_H
 #define __FSL_FMAN_H
 
+#include <rte_compat.h>
+
 #ifdef __cplusplus
 extern "C" {
 #endif
@@ -43,18 +45,23 @@ struct fm_status_t {
 } __rte_packed;
 
 /* Set MAC address for a particular interface */
+__rte_internal
 int fman_if_add_mac_addr(struct fman_if *p, uint8_t *eth, uint8_t addr_num);
 
 /* Remove a MAC address for a particular interface */
+__rte_internal
 void fman_if_clear_mac_addr(struct fman_if *p, uint8_t addr_num);
 
 /* Get the FMAN statistics */
+__rte_internal
 void fman_if_stats_get(struct fman_if *p, struct rte_eth_stats *stats);
 
 /* Reset the FMAN statistics */
+__rte_internal
 void fman_if_stats_reset(struct fman_if *p);
 
 /* Get all of the FMAN statistics */
+__rte_internal
 void fman_if_stats_get_all(struct fman_if *p, uint64_t *value, int n);
 
 /* Set ignore pause option for a specific interface */
@@ -64,32 +71,43 @@ void fman_if_set_rx_ignore_pause_frames(struct fman_if *p, bool enable);
 void fman_if_conf_max_frame_len(struct fman_if *p, unsigned int max_frame_len);
 
 /* Enable/disable Rx promiscuous mode on specified interface */
+__rte_internal
 void fman_if_promiscuous_enable(struct fman_if *p);
+__rte_internal
 void fman_if_promiscuous_disable(struct fman_if *p);
 
 /* Enable/disable Rx on specific interfaces */
+__rte_internal
 void fman_if_enable_rx(struct fman_if *p);
+__rte_internal
 void fman_if_disable_rx(struct fman_if *p);
 
 /* Enable/disable loopback on specific interfaces */
+__rte_internal
 void fman_if_loopback_enable(struct fman_if *p);
+__rte_internal
 void fman_if_loopback_disable(struct fman_if *p);
 
 /* Set buffer pool on specific interface */
+__rte_internal
 void fman_if_set_bp(struct fman_if *fm_if, unsigned int num, int bpid,
 		    size_t bufsize);
 
 /* Get Flow Control threshold parameters on specific interface */
+__rte_internal
 int fman_if_get_fc_threshold(struct fman_if *fm_if);
 
 /* Enable and Set Flow Control threshold parameters on specific interface */
+__rte_internal
 int fman_if_set_fc_threshold(struct fman_if *fm_if,
 			u32 high_water, u32 low_water, u32 bpid);
 
 /* Get Flow Control pause quanta on specific interface */
+__rte_internal
 int fman_if_get_fc_quanta(struct fman_if *fm_if);
 
 /* Set Flow Control pause quanta on specific interface */
+__rte_internal
 int fman_if_set_fc_quanta(struct fman_if *fm_if, u16 pause_quanta);
 
 /* Set default error fqid on specific interface */
@@ -99,35 +117,44 @@ void fman_if_set_err_fqid(struct fman_if *fm_if, uint32_t err_fqid);
 int fman_if_get_ic_params(struct fman_if *fm_if, struct fman_if_ic_params *icp);
 
 /* Set IC transfer params */
+__rte_internal
 int fman_if_set_ic_params(struct fman_if *fm_if,
 			  const struct fman_if_ic_params *icp);
 
 /* Get interface fd->offset value */
+__rte_internal
 int fman_if_get_fdoff(struct fman_if *fm_if);
 
 /* Set interface fd->offset value */
+__rte_internal
 void fman_if_set_fdoff(struct fman_if *fm_if, uint32_t fd_offset);
 
 /* Get interface SG enable status value */
+__rte_internal
 int fman_if_get_sg_enable(struct fman_if *fm_if);
 
 /* Set interface SG support mode */
+__rte_internal
 void fman_if_set_sg(struct fman_if *fm_if, int enable);
 
 /* Get interface Max Frame length (MTU) */
 uint16_t fman_if_get_maxfrm(struct fman_if *fm_if);
 
 /* Set interface Max Frame length (MTU) */
+__rte_internal
 void fman_if_set_maxfrm(struct fman_if *fm_if, uint16_t max_frm);
 
 /* Set interface next invoked action for dequeue operation */
 void fman_if_set_dnia(struct fman_if *fm_if, uint32_t nia);
 
 /* discard error packets on rx */
+__rte_internal
 void fman_if_discard_rx_errors(struct fman_if *fm_if);
 
+__rte_internal
 void fman_if_set_mcast_filter_table(struct fman_if *p);
 
+__rte_internal
 void fman_if_reset_mcast_filter_table(struct fman_if *p);
 
 int fman_if_add_hash_mac_addr(struct fman_if *p, uint8_t *eth);
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
index 1b3342e7e6..4411bb0a79 100644
--- a/drivers/bus/dpaa/include/fsl_qman.h
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -1314,6 +1314,7 @@ struct qman_cgr {
 #define QMAN_CGR_MODE_FRAME 0x00000001
 
 #ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+__rte_internal
 void qman_set_fq_lookup_table(void **table);
 #endif
 
@@ -1322,6 +1323,7 @@ void qman_set_fq_lookup_table(void **table);
 */
 int qman_get_portal_index(void);
 
+__rte_internal
 u32 qman_portal_dequeue(struct rte_event ev[], unsigned int poll_limit,
 			void **bufs);
 
@@ -1333,6 +1335,7 @@ u32 qman_portal_dequeue(struct rte_event ev[], unsigned int poll_limit,
 * processed via qman_poll_***() functions). Returns zero for success, or
 * -EINVAL if the current CPU is sharing a portal hosted on another CPU.
 */
+__rte_internal
 int qman_irqsource_add(u32 bits);
 
 /**
@@ -1340,6 +1343,7 @@ int qman_irqsource_add(u32 bits);
 * takes portal (fq specific) as input rather than using the thread affined
 * portal.
 */
+__rte_internal
 int qman_fq_portal_irqsource_add(struct qman_portal *p, u32 bits);
 
 /**
@@ -1350,6 +1354,7 @@ int qman_fq_portal_irqsource_add(struct qman_portal *p, u32 bits);
 * instead be processed via qman_poll_***() functions. Returns zero for success,
 * or -EINVAL if the current CPU is sharing a portal hosted on another CPU.
 */
+__rte_internal
 int qman_irqsource_remove(u32 bits);
 
 /**
@@ -1357,6 +1362,7 @@ int qman_irqsource_remove(u32 bits);
 * takes portal (fq specific) as input rather than using the thread affined
 * portal.
 */
+__rte_internal
 int qman_fq_portal_irqsource_remove(struct qman_portal *p, u32 bits);
 
 /**
@@ -1369,6 +1375,7 @@ int qman_fq_portal_irqsource_remove(struct qman_portal *p, u32 bits);
 */
 u16 qman_affine_channel(int cpu);
 
+__rte_internal
 unsigned int qman_portal_poll_rx(unsigned int poll_limit,
 				 void **bufs, struct qman_portal *q);
 
@@ -1380,6 +1387,7 @@ unsigned int qman_portal_poll_rx(unsigned int poll_limit,
 *
 * This function will issue a volatile dequeue command to the QMAN.
 */
+__rte_internal
 int qman_set_vdq(struct qman_fq *fq, u16 num, uint32_t vdqcr_flags);
 
 /**
@@ -1390,6 +1398,7 @@ int qman_set_vdq(struct qman_fq *fq, u16 num, uint32_t vdqcr_flags);
 * is issued. It will keep returning NULL until there is no packet available on
 * the DQRR.
 */
+__rte_internal
 struct qm_dqrr_entry *qman_dequeue(struct qman_fq *fq);
 
 /**
@@ -1401,6 +1410,7 @@ struct qm_dqrr_entry *qman_dequeue(struct qman_fq *fq);
 * This will consume the DQRR enrey and make it available for next volatile
 * dequeue.
 */
+__rte_internal
 void qman_dqrr_consume(struct qman_fq *fq,
 		       struct qm_dqrr_entry *dq);
 
@@ -1414,6 +1424,7 @@ void qman_dqrr_consume(struct qman_fq *fq,
 * this function will return -EINVAL, otherwise the return value is >=0 and
 * represents the number of DQRR entries processed.
 */
+__rte_internal
 int qman_poll_dqrr(unsigned int limit);
 
 /**
@@ -1460,6 +1471,7 @@ void qman_start_dequeues(void);
 * (SDQCR). The requested pools are limited to those the portal has dequeue
 * access to.
 */
+__rte_internal
 void qman_static_dequeue_add(u32 pools, struct qman_portal *qm);
 
 /**
@@ -1507,6 +1519,7 @@ void qman_dca(const struct qm_dqrr_entry *dq, int park_request);
 * function must be called from the same CPU as that which processed the DQRR
 * entry in the first place.
 */
+__rte_internal
 void qman_dca_index(u8 index, int park_request);
 
 /**
@@ -1564,6 +1577,7 @@ void qman_set_dc_ern(qman_cb_dc_ern handler, int affine);
 * a frame queue object based on that, rather than assuming/requiring that it be
 * Out of Service.
 */
+__rte_internal
 int qman_create_fq(u32 fqid, u32 flags, struct qman_fq *fq);
 
 /**
@@ -1582,6 +1596,7 @@ void qman_destroy_fq(struct qman_fq *fq, u32 flags);
 * qman_fq_fqid - Queries the frame queue ID of a FQ object
 * @fq: the frame queue object to query
 */
+__rte_internal
 u32 qman_fq_fqid(struct qman_fq *fq);
 
 /**
@@ -1594,6 +1609,7 @@ u32 qman_fq_fqid(struct qman_fq *fq);
 * This captures the state, as seen by the driver, at the time the function
 * executes.
 */
+__rte_internal
 void qman_fq_state(struct qman_fq *fq, enum qman_fq_state *state, u32 *flags);
 
 /**
@@ -1630,6 +1646,7 @@ void qman_fq_state(struct qman_fq *fq, enum qman_fq_state *state, u32 *flags);
 * context_a.address fields and will leave the stashing fields provided by the
 * user alone, otherwise it will zero out the context_a.stashing fields.
 */
+__rte_internal
 int qman_init_fq(struct qman_fq *fq, u32 flags, struct qm_mcc_initfq *opts);
 
 /**
@@ -1659,6 +1676,7 @@ int qman_schedule_fq(struct qman_fq *fq);
 * caller should be prepared to accept the callback as the function is called,
 * not only once it has returned.
 */
+__rte_internal
 int qman_retire_fq(struct qman_fq *fq, u32 *flags);
 
 /**
@@ -1668,6 +1686,7 @@ int qman_retire_fq(struct qman_fq *fq, u32 *flags);
 * The frame queue must be retired and empty, and if any order restoration list
 * was released as ERNs at the time of retirement, they must all be consumed.
 */
+__rte_internal
 int qman_oos_fq(struct qman_fq *fq);
 
 /**
@@ -1701,6 +1720,7 @@ int qman_query_fq_has_pkts(struct qman_fq *fq);
 * @fq: the frame queue object to be queried
 * @np: storage for the queried FQD fields
 */
+__rte_internal
 int qman_query_fq_np(struct qman_fq *fq, struct qm_mcr_queryfq_np *np);
 
 /**
@@ -1708,6 +1728,7 @@ int qman_query_fq_np(struct qman_fq *fq, struct qm_mcr_queryfq_np *np);
 * @fq: the frame queue object to be queried
 * @frm_cnt: number of frames in the queue
 */
+__rte_internal
 int qman_query_fq_frm_cnt(struct qman_fq *fq, u32 *frm_cnt);
 
 /**
@@ -1738,6 +1759,7 @@ int qman_query_wq(u8 query_dedicated, struct qm_mcr_querywq *wq);
 * callback, or by waiting for the QMAN_FQ_STATE_VDQCR bit to disappear from the
 * "flags" retrieved from qman_fq_state().
 */
+__rte_internal
 int qman_volatile_dequeue(struct qman_fq *fq, u32 flags, u32 vdqcr);
 
 /**
@@ -1773,8 +1795,10 @@ int qman_volatile_dequeue(struct qman_fq *fq, u32 flags, u32 vdqcr);
 * of an already busy hardware resource by throttling many of the to-be-dropped
 * enqueues "at the source".
 */
+__rte_internal
 int qman_enqueue(struct qman_fq *fq, const struct qm_fd *fd, u32 flags);
 
+__rte_internal
 int qman_enqueue_multi(struct qman_fq *fq, const struct qm_fd *fd, u32 *flags,
 		       int frames_to_send);
 
@@ -1788,6 +1812,7 @@ int qman_enqueue_multi(struct qman_fq *fq, const struct qm_fd *fd, u32 *flags,
 * This API is similar to qman_enqueue_multi(), but it takes fd which needs
 * to be processed by different frame queues.
 */
+__rte_internal
 int
 qman_enqueue_multi_fq(struct qman_fq *fq[], const struct qm_fd *fd,
 		      u32 *flags, int frames_to_send);
@@ -1876,6 +1901,7 @@ int qman_shutdown_fq(u32 fqid);
 * @fqid: the base FQID of the range to deallocate
 * @count: the number of FQIDs in the range
 */
+__rte_internal
 int qman_reserve_fqid_range(u32 fqid, unsigned int count);
 static inline int qman_reserve_fqid(u32 fqid)
 {
@@ -1895,6 +1921,7 @@ static inline int qman_reserve_fqid(u32 fqid)
 * than requested (though alignment will be as requested). If @partial is zero,
 * the return value will either be 'count' or negative.
 */
+__rte_internal
 int qman_alloc_pool_range(u32 *result, u32 count, u32 align, int partial);
 static inline int qman_alloc_pool(u32 *result)
 {
@@ -1942,6 +1969,7 @@ void qman_seed_pool_range(u32 id, unsigned int count);
 * any unspecified parameters) will be used rather than a modify hw hardware
 * (which only modifies the specified parameters).
 */
+__rte_internal
 int qman_create_cgr(struct qman_cgr *cgr, u32 flags,
 		    struct qm_mcc_initcgr *opts);
 
@@ -1964,6 +1992,7 @@ int qman_create_cgr_to_dcp(struct qman_cgr *cgr, u32 flags, u16 dcp_portal,
 * is executed. This must be excuted on the same affine portal on which it was
 * created.
 */
+__rte_internal
 int qman_delete_cgr(struct qman_cgr *cgr);
 
 /**
@@ -1980,6 +2009,7 @@ int qman_delete_cgr(struct qman_cgr *cgr);
 * unspecified parameters) will be used rather than a modify hw hardware (which
 * only modifies the specified parameters).
 */
+__rte_internal
 int qman_modify_cgr(struct qman_cgr *cgr, u32 flags,
 		    struct qm_mcc_initcgr *opts);
 
@@ -2008,6 +2038,7 @@ int qman_query_congestion(struct qm_mcr_querycongestion *congestion);
 * than requested (though alignment will be as requested). If @partial is zero,
 * the return value will either be 'count' or negative.
 */
+__rte_internal
 int qman_alloc_cgrid_range(u32 *result, u32 count, u32 align, int partial);
 static inline int qman_alloc_cgrid(u32 *result)
 {
@@ -2021,6 +2052,7 @@ static inline int qman_alloc_cgrid(u32 *result)
 * @id: the base CGR ID of the range to deallocate
 * @count: the number of CGR IDs in the range
 */
+__rte_internal
 void qman_release_cgrid_range(u32 id, unsigned int count);
 static inline void qman_release_cgrid(u32 id)
 {
diff --git a/drivers/bus/dpaa/include/fsl_usd.h b/drivers/bus/dpaa/include/fsl_usd.h
index 263d9bb976..30ec63a09d 100644
--- a/drivers/bus/dpaa/include/fsl_usd.h
+++ b/drivers/bus/dpaa/include/fsl_usd.h
@@ -58,6 +58,7 @@ int bman_allocate_raw_portal(struct dpaa_raw_portal *portal);
 int bman_free_raw_portal(struct dpaa_raw_portal *portal);
 
 /* Obtain thread-local UIO file-descriptors */
+__rte_internal
 int qman_thread_fd(void);
 int bman_thread_fd(void);
 
@@ -66,8 +67,12 @@ int bman_thread_fd(void);
 * processing is complete. As such, it is essential to call this before going
 * into another blocking read/select/poll.
 */
+__rte_internal
 void qman_thread_irq(void);
+
+__rte_internal
 void bman_thread_irq(void);
+__rte_internal
 void qman_fq_portal_thread_irq(struct qman_portal *qp);
 
 void qman_clear_irq(void);
@@ -77,6 +82,7 @@ int qman_global_init(void);
 int bman_global_init(void);
 
 /* Direct portal create and destroy */
+__rte_internal
 struct qman_portal *fsl_qman_fq_portal_create(int *fd);
 int fsl_qman_fq_portal_destroy(struct qman_portal *qp);
 int fsl_qman_fq_portal_init(struct qman_portal *qp);
diff --git a/drivers/bus/dpaa/include/netcfg.h b/drivers/bus/dpaa/include/netcfg.h
index bf7bfae8cb..d7d1befd24 100644
--- a/drivers/bus/dpaa/include/netcfg.h
+++ b/drivers/bus/dpaa/include/netcfg.h
@@ -46,11 +46,13 @@ struct netcfg_interface {
 * cfg_file: FMC config XML file
 * Returns the configuration information in newly allocated memory.
 */
+__rte_internal
 struct netcfg_info *netcfg_acquire(void);
 
 /* cfg_ptr: configuration information pointer.
 * Frees the resources allocated by the configuration layer.
 */
+__rte_internal
 void netcfg_release(struct netcfg_info *cfg_ptr);
 
 #ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index e6ca4361e0..f4947fac41 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -1,4 +1,4 @@
-DPDK_20.0 {
+INTERNAL {
 	global:
 
 	bman_acquire;
@@ -13,7 +13,6 @@ DPDK_20.0 {
 	dpaa_logtype_pmd;
 	dpaa_netcfg;
 	dpaa_svr_family;
-	fman_ccsr_map_fd;
 	fman_dealloc_bufs_mask_hi;
 	fman_dealloc_bufs_mask_lo;
 	fman_if_add_mac_addr;
@@ -51,7 +50,6 @@ DPDK_20.0 {
 	qm_channel_pool1;
 	qman_alloc_cgrid_range;
 	qman_alloc_pool_range;
-	qman_clear_irq;
 	qman_create_cgr;
 	qman_create_fq;
 	qman_dca_index;
@@ -87,10 +85,7 @@ DPDK_20.0 {
 	qman_volatile_dequeue;
 	rte_dpaa_driver_register;
 	rte_dpaa_driver_unregister;
-	rte_dpaa_mem_ptov;
 	rte_dpaa_portal_fq_close;
 	rte_dpaa_portal_fq_init;
 	rte_dpaa_portal_init;
-
-	local: *;
 };
diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
index 373aca9785..d4aee132ef 100644
--- a/drivers/bus/dpaa/rte_dpaa_bus.h
+++ b/drivers/bus/dpaa/rte_dpaa_bus.h
@@ -158,6 +158,7 @@ rte_dpaa_mem_vtop(void *vaddr)
 * A pointer to a rte_dpaa_driver structure describing the driver
 * to be registered.
 */
+__rte_internal
 void rte_dpaa_driver_register(struct rte_dpaa_driver *driver);
 
 /**
@@ -167,6 +168,7 @@ void rte_dpaa_driver_register(struct rte_dpaa_driver *driver);
 *	A pointer to a rte_dpaa_driver structure describing the driver
 *	to be unregistered.
 */
+__rte_internal
 void rte_dpaa_driver_unregister(struct rte_dpaa_driver *driver);
 
 /**
@@ -178,10 +180,13 @@ void rte_dpaa_driver_unregister(struct rte_dpaa_driver *driver);
 * @return
 *	0 in case of success, error otherwise
 */
+__rte_internal
 int rte_dpaa_portal_init(void *arg);
 
+__rte_internal
 int rte_dpaa_portal_fq_init(void *arg, struct qman_fq *fq);
 
+__rte_internal
 int rte_dpaa_portal_fq_close(struct qman_fq *fq);
 
 /**