Merge tag 'net-6.7-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
 "Including fixes from netfilter and bpf.

  Current release - regressions:

   - sched: fix SKB_NOT_DROPPED_YET splat under debug config

  Current release - new code bugs:

   - tcp:
       - fix usec timestamps with TCP fastopen
       - fix possible out-of-bounds reads in tcp_hash_fail()
       - fix SYN option room calculation for TCP-AO

   - tcp_sigpool: fix some off by one bugs

   - bpf: fix compilation error without CGROUPS

   - ptp:
       - ptp_read() should not release queue
       - fix tsevqs corruption

  Previous releases - regressions:

   - llc: verify mac len before reading mac header

  Previous releases - always broken:

   - bpf:
       - fix check_stack_write_fixed_off() to correctly spill imm
       - fix precision tracking for BPF_ALU | BPF_TO_BE | BPF_END
       - check map->usercnt after timer->timer is assigned

   - dsa: lan9303: consequently nested-lock physical MDIO

   - dccp/tcp: call security_inet_conn_request() after setting IP addr

   - tg3: fix the TX ring stall due to incorrect full ring handling

   - phylink: initialize carrier state at creation

   - ice: fix direction of VF rules in switchdev mode

  Misc:

   - fill in a bunch of missing MODULE_DESCRIPTION()s, more to come"

* tag 'net-6.7-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (84 commits)
  net: ti: icss-iep: fix setting counter value
  ptp: fix corrupted list in ptp_open
  ptp: ptp_read should not release queue
  net_sched: sch_fq: better validate TCA_FQ_WEIGHTS and TCA_FQ_PRIOMAP
  net: kcm: fill in MODULE_DESCRIPTION()
  net/sched: act_ct: Always fill offloading tuple iifidx
  netfilter: nat: fix ipv6 nat redirect with mapped and scoped addresses
  netfilter: xt_recent: fix (increase) ipv6 literal buffer length
  ipvs: add missing module descriptions
  netfilter: nf_tables: remove catchall element in GC sync path
  netfilter: add missing module descriptions
  drivers/net/ppp: use standard array-copy-function
  net: enetc: shorten enetc_setup_xdp_prog() error message to fit NETLINK_MAX_FMTMSG_LEN
  virtio/vsock: Fix uninit-value in virtio_transport_recv_pkt()
  r8169: respect userspace disabling IFF_MULTICAST
  selftests/bpf: get trusted cgrp from bpf_iter__cgroup directly
  bpf: Let verifier consider {task,cgroup} is trusted in bpf_iter_reg
  net: phylink: initialize carrier state at creation
  test/vsock: add dobule bind connect test
  test/vsock: refactor vsock_accept
  ...
Commit 89cdf9d556 by Linus Torvalds, 2023-11-09 17:09:35 -08:00
178 changed files with 1242 additions and 434 deletions

@@ -37,16 +37,14 @@ prototype in a header for the wrapper kfunc.
 An example is given below::
 /* Disables missing prototype warnings */
-__diag_push();
-__diag_ignore_all("-Wmissing-prototypes",
-"Global kfuncs as their definitions will be in BTF");
+__bpf_kfunc_start_defs();
 __bpf_kfunc struct task_struct *bpf_find_get_task_by_vpid(pid_t nr)
 {
 return find_get_task_by_vpid(nr);
 }
-__diag_pop();
+__bpf_kfunc_end_defs();
 A wrapper kfunc is often needed when we need to annotate parameters of the
 kfunc. Otherwise one may directly make the kfunc visible to the BPF program by

@@ -71,6 +71,10 @@ definitions:
 name: roce-bit
 -
 name: migratable-bit
+-
+name: ipsec-crypto-bit
+-
+name: ipsec-packet-bit
 -
 type: enum
 name: sb-threshold-type

@@ -44,18 +44,16 @@ smcr_testlink_time - INTEGER
 wmem - INTEGER
 Initial size of send buffer used by SMC sockets.
-The default value inherits from net.ipv4.tcp_wmem[1].
 The minimum value is 16KiB and there is no hard limit for max value, but
 only allowed 512KiB for SMC-R and 1MiB for SMC-D.
-Default: 16K
+Default: 64KiB
 rmem - INTEGER
 Initial size of receive buffer (RMB) used by SMC sockets.
-The default value inherits from net.ipv4.tcp_rmem[1].
 The minimum value is 16KiB and there is no hard limit for max value, but
 only allowed 512KiB for SMC-R and 1MiB for SMC-D.
-Default: 128K
+Default: 64KiB

@@ -32,7 +32,7 @@ static int lan9303_mdio_write(void *ctx, uint32_t reg, uint32_t val)
 struct lan9303_mdio *sw_dev = (struct lan9303_mdio *)ctx;
 reg <<= 2; /* reg num to offset */
-mutex_lock(&sw_dev->device->bus->mdio_lock);
+mutex_lock_nested(&sw_dev->device->bus->mdio_lock, MDIO_MUTEX_NESTED);
 lan9303_mdio_real_write(sw_dev->device, reg, val & 0xffff);
 lan9303_mdio_real_write(sw_dev->device, reg + 2, (val >> 16) & 0xffff);
 mutex_unlock(&sw_dev->device->bus->mdio_lock);
@@ -50,7 +50,7 @@ static int lan9303_mdio_read(void *ctx, uint32_t reg, uint32_t *val)
 struct lan9303_mdio *sw_dev = (struct lan9303_mdio *)ctx;
 reg <<= 2; /* reg num to offset */
-mutex_lock(&sw_dev->device->bus->mdio_lock);
+mutex_lock_nested(&sw_dev->device->bus->mdio_lock, MDIO_MUTEX_NESTED);
 *val = lan9303_mdio_real_read(sw_dev->device, reg);
 *val |= (lan9303_mdio_real_read(sw_dev->device, reg + 2) << 16);
 mutex_unlock(&sw_dev->device->bus->mdio_lock);
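
The lan9303 change above switches the inner MDIO bus lock to mutex_lock_nested() so lockdep can distinguish this nested bus lock from the outer lock already held by the MDIO core. A minimal sketch of the same lockdep-subclass idea, using hypothetical locks and a hypothetical subclass constant (not the driver's code):

#include <linux/mutex.h>

#define INNER_BUS_NESTED 1	/* analogous role to MDIO_MUTEX_NESTED */

struct outer_dev {
	struct mutex lock;	/* taken first, default subclass 0 */
};

struct inner_bus {
	struct mutex lock;	/* always taken while outer lock is held */
};

static void touch_bus(struct outer_dev *dev, struct inner_bus *bus)
{
	mutex_lock(&dev->lock);
	/* Annotated nested acquisition: tells lockdep the nesting is
	 * intentional, so it does not flag a false deadlock when both
	 * mutexes share a lock class.
	 */
	mutex_lock_nested(&bus->lock, INNER_BUS_NESTED);

	/* ... access registers behind the inner bus ... */

	mutex_unlock(&bus->lock);
	mutex_unlock(&dev->lock);
}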

@@ -6647,9 +6647,9 @@ static void tg3_tx(struct tg3_napi *tnapi)
 tnapi->tx_cons = sw_idx;
-/* Need to make the tx_cons update visible to tg3_start_xmit()
+/* Need to make the tx_cons update visible to __tg3_start_xmit()
 * before checking for netif_queue_stopped(). Without the
-* memory barrier, there is a small possibility that tg3_start_xmit()
+* memory barrier, there is a small possibility that __tg3_start_xmit()
 * will miss it and cause the queue to be stopped forever.
 */
 smp_mb();
@@ -7889,7 +7889,7 @@ static bool tg3_tso_bug_gso_check(struct tg3_napi *tnapi, struct sk_buff *skb)
 return skb_shinfo(skb)->gso_segs < tnapi->tx_pending / 3;
 }
-static netdev_tx_t tg3_start_xmit(struct sk_buff *, struct net_device *);
+static netdev_tx_t __tg3_start_xmit(struct sk_buff *, struct net_device *);
 /* Use GSO to workaround all TSO packets that meet HW bug conditions
 * indicated in tg3_tx_frag_set()
@@ -7923,7 +7923,7 @@ static int tg3_tso_bug(struct tg3 *tp, struct tg3_napi *tnapi,
 skb_list_walk_safe(segs, seg, next) {
 skb_mark_not_on_list(seg);
-tg3_start_xmit(seg, tp->dev);
+__tg3_start_xmit(seg, tp->dev);
 }
 tg3_tso_bug_end:
@@ -7933,7 +7933,7 @@ static int tg3_tso_bug(struct tg3 *tp, struct tg3_napi *tnapi,
 }
 /* hard_start_xmit for all devices */
-static netdev_tx_t tg3_start_xmit(struct sk_buff *skb, struct net_device *dev)
+static netdev_tx_t __tg3_start_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 struct tg3 *tp = netdev_priv(dev);
 u32 len, entry, base_flags, mss, vlan = 0;
@@ -8182,11 +8182,6 @@ static netdev_tx_t tg3_start_xmit(struct sk_buff *skb, struct net_device *dev)
 netif_tx_wake_queue(txq);
 }
-if (!netdev_xmit_more() || netif_xmit_stopped(txq)) {
-/* Packets are ready, update Tx producer idx on card. */
-tw32_tx_mbox(tnapi->prodmbox, entry);
-}
 return NETDEV_TX_OK;
 dma_error:
@@ -8199,6 +8194,42 @@ static netdev_tx_t tg3_start_xmit(struct sk_buff *skb, struct net_device *dev)
 return NETDEV_TX_OK;
 }
+static netdev_tx_t tg3_start_xmit(struct sk_buff *skb, struct net_device *dev)
+{
+struct netdev_queue *txq;
+u16 skb_queue_mapping;
+netdev_tx_t ret;
+skb_queue_mapping = skb_get_queue_mapping(skb);
+txq = netdev_get_tx_queue(dev, skb_queue_mapping);
+ret = __tg3_start_xmit(skb, dev);
+/* Notify the hardware that packets are ready by updating the TX ring
+* tail pointer. We respect netdev_xmit_more() thus avoiding poking
+* the hardware for every packet. To guarantee forward progress the TX
+* ring must be drained when it is full as indicated by
+* netif_xmit_stopped(). This needs to happen even when the current
+* skb was dropped or rejected with NETDEV_TX_BUSY. Otherwise packets
+* queued by previous __tg3_start_xmit() calls might get stuck in
+* the queue forever.
+*/
+if (!netdev_xmit_more() || netif_xmit_stopped(txq)) {
+struct tg3_napi *tnapi;
+struct tg3 *tp;
+tp = netdev_priv(dev);
+tnapi = &tp->napi[skb_queue_mapping];
+if (tg3_flag(tp, ENABLE_TSS))
+tnapi++;
+tw32_tx_mbox(tnapi->prodmbox, tnapi->tx_prod);
+}
+return ret;
+}
 static void tg3_mac_loopback(struct tg3 *tp, bool enable)
 {
 if (enable) {
@@ -17729,7 +17760,7 @@ static int tg3_init_one(struct pci_dev *pdev,
 * device behind the EPB cannot support DMA addresses > 40-bit.
 * On 64-bit systems with IOMMU, use 40-bit dma_mask.
 * On 64-bit systems without IOMMU, use 64-bit dma_mask and
-* do DMA address check in tg3_start_xmit().
+* do DMA address check in __tg3_start_xmit().
 */
 if (tg3_flag(tp, IS_5788))
 persist_dma_mask = dma_mask = DMA_BIT_MASK(32);
@@ -18127,7 +18158,8 @@ static void tg3_shutdown(struct pci_dev *pdev)
 if (netif_running(dev))
 dev_close(dev);
-tg3_power_down(tp);
+if (system_state == SYSTEM_POWER_OFF)
+tg3_power_down(tp);
 rtnl_unlock();

@@ -2769,7 +2769,7 @@ static int enetc_setup_xdp_prog(struct net_device *ndev, struct bpf_prog *prog,
 if (priv->min_num_stack_tx_queues + num_xdp_tx_queues >
 priv->num_tx_rings) {
 NL_SET_ERR_MSG_FMT_MOD(extack,
-"Reserving %d XDP TXQs does not leave a minimum of %d TXQs for network stack (total %d available)",
+"Reserving %d XDP TXQs does not leave a minimum of %d for stack (total %d)",
 num_xdp_tx_queues,
 priv->min_num_stack_tx_queues,
 priv->num_tx_rings);

@@ -231,6 +231,5 @@ int i40e_devlink_create_port(struct i40e_pf *pf)
 **/
 void i40e_devlink_destroy_port(struct i40e_pf *pf)
 {
-devlink_port_type_clear(&pf->devlink_port);
 devlink_port_unregister(&pf->devlink_port);
 }

@@ -14213,8 +14213,7 @@ int i40e_vsi_release(struct i40e_vsi *vsi)
 }
 set_bit(__I40E_VSI_RELEASING, vsi->state);
 uplink_seid = vsi->uplink_seid;
-if (vsi->type == I40E_VSI_MAIN)
-i40e_devlink_destroy_port(pf);
 if (vsi->type != I40E_VSI_SRIOV) {
 if (vsi->netdev_registered) {
 vsi->netdev_registered = false;
@@ -14228,6 +14227,9 @@ int i40e_vsi_release(struct i40e_vsi *vsi)
 i40e_vsi_disable_irq(vsi);
 }
+if (vsi->type == I40E_VSI_MAIN)
+i40e_devlink_destroy_port(pf);
 spin_lock_bh(&vsi->mac_filter_hash_lock);
 /* clear the sync flag on all filters */
@@ -14402,14 +14404,14 @@ static struct i40e_vsi *i40e_vsi_reinit_setup(struct i40e_vsi *vsi)
 err_rings:
 i40e_vsi_free_q_vectors(vsi);
-if (vsi->type == I40E_VSI_MAIN)
-i40e_devlink_destroy_port(pf);
 if (vsi->netdev_registered) {
 vsi->netdev_registered = false;
 unregister_netdev(vsi->netdev);
 free_netdev(vsi->netdev);
 vsi->netdev = NULL;
 }
+if (vsi->type == I40E_VSI_MAIN)
+i40e_devlink_destroy_port(pf);
 i40e_aq_delete_element(&pf->hw, vsi->seid, NULL);
 err_vsi:
 i40e_vsi_clear(vsi);

@@ -628,7 +628,7 @@ void ice_lag_move_new_vf_nodes(struct ice_vf *vf)
 INIT_LIST_HEAD(&ndlist.node);
 rcu_read_lock();
 for_each_netdev_in_bond_rcu(lag->upper_netdev, tmp_nd) {
-nl = kzalloc(sizeof(*nl), GFP_KERNEL);
+nl = kzalloc(sizeof(*nl), GFP_ATOMIC);
 if (!nl)
 break;
@@ -1555,18 +1555,12 @@ static void ice_lag_chk_disabled_bond(struct ice_lag *lag, void *ptr)
 */
 static void ice_lag_disable_sriov_bond(struct ice_lag *lag)
 {
-struct ice_lag_netdev_list *entry;
 struct ice_netdev_priv *np;
-struct net_device *netdev;
 struct ice_pf *pf;
-list_for_each_entry(entry, lag->netdev_head, node) {
-netdev = entry->netdev;
-np = netdev_priv(netdev);
-pf = np->vsi->back;
-ice_clear_feature_support(pf, ICE_F_SRIOV_LAG);
-}
+np = netdev_priv(lag->netdev);
+pf = np->vsi->back;
+ice_clear_feature_support(pf, ICE_F_SRIOV_LAG);
 }
 /**
@@ -1698,7 +1692,7 @@ ice_lag_event_handler(struct notifier_block *notif_blk, unsigned long event,
 rcu_read_lock();
 for_each_netdev_in_bond_rcu(upper_netdev, tmp_nd) {
-nd_list = kzalloc(sizeof(*nd_list), GFP_KERNEL);
+nd_list = kzalloc(sizeof(*nd_list), GFP_ATOMIC);
 if (!nd_list)
 break;
@@ -2075,7 +2069,7 @@ void ice_lag_rebuild(struct ice_pf *pf)
 INIT_LIST_HEAD(&ndlist.node);
 rcu_read_lock();
 for_each_netdev_in_bond_rcu(lag->upper_netdev, tmp_nd) {
-nl = kzalloc(sizeof(*nl), GFP_KERNEL);
+nl = kzalloc(sizeof(*nl), GFP_ATOMIC);
 if (!nl)
 break;

@@ -630,32 +630,83 @@ bool ice_is_tunnel_supported(struct net_device *dev)
 return ice_tc_tun_get_type(dev) != TNL_LAST;
 }
-static int
-ice_eswitch_tc_parse_action(struct ice_tc_flower_fltr *fltr,
-struct flow_action_entry *act)
+static bool ice_tc_is_dev_uplink(struct net_device *dev)
+{
+return netif_is_ice(dev) || ice_is_tunnel_supported(dev);
+}
+static int ice_tc_setup_redirect_action(struct net_device *filter_dev,
+struct ice_tc_flower_fltr *fltr,
+struct net_device *target_dev)
 {
 struct ice_repr *repr;
+fltr->action.fltr_act = ICE_FWD_TO_VSI;
+if (ice_is_port_repr_netdev(filter_dev) &&
+ice_is_port_repr_netdev(target_dev)) {
+repr = ice_netdev_to_repr(target_dev);
+fltr->dest_vsi = repr->src_vsi;
+fltr->direction = ICE_ESWITCH_FLTR_EGRESS;
+} else if (ice_is_port_repr_netdev(filter_dev) &&
+ice_tc_is_dev_uplink(target_dev)) {
+repr = ice_netdev_to_repr(filter_dev);
+fltr->dest_vsi = repr->src_vsi->back->switchdev.uplink_vsi;
+fltr->direction = ICE_ESWITCH_FLTR_EGRESS;
+} else if (ice_tc_is_dev_uplink(filter_dev) &&
+ice_is_port_repr_netdev(target_dev)) {
+repr = ice_netdev_to_repr(target_dev);
+fltr->dest_vsi = repr->src_vsi;
+fltr->direction = ICE_ESWITCH_FLTR_INGRESS;
+} else {
+NL_SET_ERR_MSG_MOD(fltr->extack,
+"Unsupported netdevice in switchdev mode");
+return -EINVAL;
+}
+return 0;
+}
+static int
+ice_tc_setup_drop_action(struct net_device *filter_dev,
+struct ice_tc_flower_fltr *fltr)
+{
+fltr->action.fltr_act = ICE_DROP_PACKET;
+if (ice_is_port_repr_netdev(filter_dev)) {
+fltr->direction = ICE_ESWITCH_FLTR_EGRESS;
+} else if (ice_tc_is_dev_uplink(filter_dev)) {
+fltr->direction = ICE_ESWITCH_FLTR_INGRESS;
+} else {
+NL_SET_ERR_MSG_MOD(fltr->extack,
+"Unsupported netdevice in switchdev mode");
+return -EINVAL;
+}
+return 0;
+}
+static int ice_eswitch_tc_parse_action(struct net_device *filter_dev,
+struct ice_tc_flower_fltr *fltr,
+struct flow_action_entry *act)
+{
+int err;
 switch (act->id) {
 case FLOW_ACTION_DROP:
-fltr->action.fltr_act = ICE_DROP_PACKET;
+err = ice_tc_setup_drop_action(filter_dev, fltr);
+if (err)
+return err;
 break;
 case FLOW_ACTION_REDIRECT:
-fltr->action.fltr_act = ICE_FWD_TO_VSI;
-if (ice_is_port_repr_netdev(act->dev)) {
-repr = ice_netdev_to_repr(act->dev);
-fltr->dest_vsi = repr->src_vsi;
-fltr->direction = ICE_ESWITCH_FLTR_INGRESS;
-} else if (netif_is_ice(act->dev) ||
-ice_is_tunnel_supported(act->dev)) {
-fltr->direction = ICE_ESWITCH_FLTR_EGRESS;
-} else {
-NL_SET_ERR_MSG_MOD(fltr->extack, "Unsupported netdevice in switchdev mode");
-return -EINVAL;
-}
+err = ice_tc_setup_redirect_action(filter_dev, fltr, act->dev);
+if (err)
+return err;
 break;
@@ -696,10 +747,6 @@ ice_eswitch_add_tc_fltr(struct ice_vsi *vsi, struct ice_tc_flower_fltr *fltr)
 goto exit;
 }
-/* egress traffic is always redirect to uplink */
-if (fltr->direction == ICE_ESWITCH_FLTR_EGRESS)
-fltr->dest_vsi = vsi->back->switchdev.uplink_vsi;
 rule_info.sw_act.fltr_act = fltr->action.fltr_act;
 if (fltr->action.fltr_act != ICE_DROP_PACKET)
 rule_info.sw_act.vsi_handle = fltr->dest_vsi->idx;
@@ -713,13 +760,21 @@ ice_eswitch_add_tc_fltr(struct ice_vsi *vsi, struct ice_tc_flower_fltr *fltr)
 rule_info.flags_info.act_valid = true;
 if (fltr->direction == ICE_ESWITCH_FLTR_INGRESS) {
+/* Uplink to VF */
 rule_info.sw_act.flag |= ICE_FLTR_RX;
 rule_info.sw_act.src = hw->pf_id;
 rule_info.flags_info.act = ICE_SINGLE_ACT_LB_ENABLE;
-} else {
+} else if (fltr->direction == ICE_ESWITCH_FLTR_EGRESS &&
+fltr->dest_vsi == vsi->back->switchdev.uplink_vsi) {
+/* VF to Uplink */
 rule_info.sw_act.flag |= ICE_FLTR_TX;
 rule_info.sw_act.src = vsi->idx;
 rule_info.flags_info.act = ICE_SINGLE_ACT_LAN_ENABLE;
+} else {
+/* VF to VF */
+rule_info.sw_act.flag |= ICE_FLTR_TX;
+rule_info.sw_act.src = vsi->idx;
+rule_info.flags_info.act = ICE_SINGLE_ACT_LB_ENABLE;
 }
 /* specify the cookie as filter_rule_id */
@@ -1745,16 +1800,17 @@ ice_tc_parse_action(struct ice_vsi *vsi, struct ice_tc_flower_fltr *fltr,
 /**
 * ice_parse_tc_flower_actions - Parse the actions for a TC filter
+* @filter_dev: Pointer to device on which filter is being added
 * @vsi: Pointer to VSI
 * @cls_flower: Pointer to TC flower offload structure
 * @fltr: Pointer to TC flower filter structure
 *
 * Parse the actions for a TC filter
 */
-static int
-ice_parse_tc_flower_actions(struct ice_vsi *vsi,
+static int ice_parse_tc_flower_actions(struct net_device *filter_dev,
+struct ice_vsi *vsi,
 struct flow_cls_offload *cls_flower,
 struct ice_tc_flower_fltr *fltr)
 {
 struct flow_rule *rule = flow_cls_offload_flow_rule(cls_flower);
 struct flow_action *flow_action = &rule->action;
@@ -1769,7 +1825,7 @@ ice_parse_tc_flower_actions(struct ice_vsi *vsi,
 flow_action_for_each(i, act, flow_action) {
 if (ice_is_eswitch_mode_switchdev(vsi->back))
-err = ice_eswitch_tc_parse_action(fltr, act);
+err = ice_eswitch_tc_parse_action(filter_dev, fltr, act);
 else
 err = ice_tc_parse_action(vsi, fltr, act);
 if (err)
@@ -1856,7 +1912,7 @@ ice_add_tc_fltr(struct net_device *netdev, struct ice_vsi *vsi,
 if (err < 0)
 goto err;
-err = ice_parse_tc_flower_actions(vsi, f, fltr);
+err = ice_parse_tc_flower_actions(netdev, vsi, f, fltr);
 if (err < 0)
 goto err;

@@ -2365,7 +2365,7 @@ static void idpf_tx_splitq_map(struct idpf_queue *tx_q,
 */
 int idpf_tso(struct sk_buff *skb, struct idpf_tx_offload_params *off)
 {
-const struct skb_shared_info *shinfo = skb_shinfo(skb);
+const struct skb_shared_info *shinfo;
 union {
 struct iphdr *v4;
 struct ipv6hdr *v6;
@@ -2379,13 +2379,15 @@ int idpf_tso(struct sk_buff *skb, struct idpf_tx_offload_params *off)
 u32 paylen, l4_start;
 int err;
-if (!shinfo->gso_size)
+if (!skb_is_gso(skb))
 return 0;
 err = skb_cow_head(skb, 0);
 if (err < 0)
 return err;
+shinfo = skb_shinfo(skb);
 ip.hdr = skb_network_header(skb);
 l4.hdr = skb_transport_header(skb);

@@ -818,7 +818,6 @@ void otx2_sqb_flush(struct otx2_nic *pfvf)
 int qidx, sqe_tail, sqe_head;
 struct otx2_snd_queue *sq;
 u64 incr, *ptr, val;
-int timeout = 1000;
 ptr = (u64 *)otx2_get_regaddr(pfvf, NIX_LF_SQ_OP_STATUS);
 for (qidx = 0; qidx < otx2_get_total_tx_queues(pfvf); qidx++) {
@@ -827,15 +826,11 @@ void otx2_sqb_flush(struct otx2_nic *pfvf)
 continue;
 incr = (u64)qidx << 32;
-while (timeout) {
-val = otx2_atomic64_add(incr, ptr);
-sqe_head = (val >> 20) & 0x3F;
-sqe_tail = (val >> 28) & 0x3F;
-if (sqe_head == sqe_tail)
-break;
-usleep_range(1, 3);
-timeout--;
-}
+val = otx2_atomic64_add(incr, ptr);
+sqe_head = (val >> 20) & 0x3F;
+sqe_tail = (val >> 28) & 0x3F;
+if (sqe_head != sqe_tail)
+usleep_range(50, 60);
 }
 }

@@ -977,6 +977,7 @@ int otx2_txschq_config(struct otx2_nic *pfvf, int lvl, int prio, bool pfc_en);
 int otx2_txsch_alloc(struct otx2_nic *pfvf);
 void otx2_txschq_stop(struct otx2_nic *pfvf);
 void otx2_txschq_free_one(struct otx2_nic *pfvf, u16 lvl, u16 schq);
+void otx2_free_pending_sqe(struct otx2_nic *pfvf);
 void otx2_sqb_flush(struct otx2_nic *pfvf);
 int otx2_alloc_rbuf(struct otx2_nic *pfvf, struct otx2_pool *pool,
 dma_addr_t *dma);

@@ -1193,31 +1193,32 @@ static char *nix_mnqerr_e_str[NIX_MNQERR_MAX] = {
 };
 static char *nix_snd_status_e_str[NIX_SND_STATUS_MAX] = {
-"NIX_SND_STATUS_GOOD",
-"NIX_SND_STATUS_SQ_CTX_FAULT",
-"NIX_SND_STATUS_SQ_CTX_POISON",
-"NIX_SND_STATUS_SQB_FAULT",
-"NIX_SND_STATUS_SQB_POISON",
-"NIX_SND_STATUS_HDR_ERR",
-"NIX_SND_STATUS_EXT_ERR",
-"NIX_SND_STATUS_JUMP_FAULT",
-"NIX_SND_STATUS_JUMP_POISON",
-"NIX_SND_STATUS_CRC_ERR",
-"NIX_SND_STATUS_IMM_ERR",
-"NIX_SND_STATUS_SG_ERR",
-"NIX_SND_STATUS_MEM_ERR",
-"NIX_SND_STATUS_INVALID_SUBDC",
-"NIX_SND_STATUS_SUBDC_ORDER_ERR",
-"NIX_SND_STATUS_DATA_FAULT",
-"NIX_SND_STATUS_DATA_POISON",
-"NIX_SND_STATUS_NPC_DROP_ACTION",
-"NIX_SND_STATUS_LOCK_VIOL",
-"NIX_SND_STATUS_NPC_UCAST_CHAN_ERR",
-"NIX_SND_STATUS_NPC_MCAST_CHAN_ERR",
-"NIX_SND_STATUS_NPC_MCAST_ABORT",
-"NIX_SND_STATUS_NPC_VTAG_PTR_ERR",
-"NIX_SND_STATUS_NPC_VTAG_SIZE_ERR",
-"NIX_SND_STATUS_SEND_STATS_ERR",
+[NIX_SND_STATUS_GOOD] = "NIX_SND_STATUS_GOOD",
+[NIX_SND_STATUS_SQ_CTX_FAULT] = "NIX_SND_STATUS_SQ_CTX_FAULT",
+[NIX_SND_STATUS_SQ_CTX_POISON] = "NIX_SND_STATUS_SQ_CTX_POISON",
+[NIX_SND_STATUS_SQB_FAULT] = "NIX_SND_STATUS_SQB_FAULT",
+[NIX_SND_STATUS_SQB_POISON] = "NIX_SND_STATUS_SQB_POISON",
+[NIX_SND_STATUS_HDR_ERR] = "NIX_SND_STATUS_HDR_ERR",
+[NIX_SND_STATUS_EXT_ERR] = "NIX_SND_STATUS_EXT_ERR",
+[NIX_SND_STATUS_JUMP_FAULT] = "NIX_SND_STATUS_JUMP_FAULT",
+[NIX_SND_STATUS_JUMP_POISON] = "NIX_SND_STATUS_JUMP_POISON",
+[NIX_SND_STATUS_CRC_ERR] = "NIX_SND_STATUS_CRC_ERR",
+[NIX_SND_STATUS_IMM_ERR] = "NIX_SND_STATUS_IMM_ERR",
+[NIX_SND_STATUS_SG_ERR] = "NIX_SND_STATUS_SG_ERR",
+[NIX_SND_STATUS_MEM_ERR] = "NIX_SND_STATUS_MEM_ERR",
+[NIX_SND_STATUS_INVALID_SUBDC] = "NIX_SND_STATUS_INVALID_SUBDC",
+[NIX_SND_STATUS_SUBDC_ORDER_ERR] = "NIX_SND_STATUS_SUBDC_ORDER_ERR",
+[NIX_SND_STATUS_DATA_FAULT] = "NIX_SND_STATUS_DATA_FAULT",
+[NIX_SND_STATUS_DATA_POISON] = "NIX_SND_STATUS_DATA_POISON",
+[NIX_SND_STATUS_NPC_DROP_ACTION] = "NIX_SND_STATUS_NPC_DROP_ACTION",
+[NIX_SND_STATUS_LOCK_VIOL] = "NIX_SND_STATUS_LOCK_VIOL",
+[NIX_SND_STATUS_NPC_UCAST_CHAN_ERR] = "NIX_SND_STAT_NPC_UCAST_CHAN_ERR",
+[NIX_SND_STATUS_NPC_MCAST_CHAN_ERR] = "NIX_SND_STAT_NPC_MCAST_CHAN_ERR",
+[NIX_SND_STATUS_NPC_MCAST_ABORT] = "NIX_SND_STATUS_NPC_MCAST_ABORT",
+[NIX_SND_STATUS_NPC_VTAG_PTR_ERR] = "NIX_SND_STATUS_NPC_VTAG_PTR_ERR",
+[NIX_SND_STATUS_NPC_VTAG_SIZE_ERR] = "NIX_SND_STATUS_NPC_VTAG_SIZE_ERR",
+[NIX_SND_STATUS_SEND_MEM_FAULT] = "NIX_SND_STATUS_SEND_MEM_FAULT",
+[NIX_SND_STATUS_SEND_STATS_ERR] = "NIX_SND_STATUS_SEND_STATS_ERR",
 };
 static irqreturn_t otx2_q_intr_handler(int irq, void *data)
@@ -1238,14 +1239,16 @@ static irqreturn_t otx2_q_intr_handler(int irq, void *data)
 continue;
 if (val & BIT_ULL(42)) {
-netdev_err(pf->netdev, "CQ%lld: error reading NIX_LF_CQ_OP_INT, NIX_LF_ERR_INT 0x%llx\n",
+netdev_err(pf->netdev,
+"CQ%lld: error reading NIX_LF_CQ_OP_INT, NIX_LF_ERR_INT 0x%llx\n",
 qidx, otx2_read64(pf, NIX_LF_ERR_INT));
 } else {
 if (val & BIT_ULL(NIX_CQERRINT_DOOR_ERR))
 netdev_err(pf->netdev, "CQ%lld: Doorbell error",
 qidx);
 if (val & BIT_ULL(NIX_CQERRINT_CQE_FAULT))
-netdev_err(pf->netdev, "CQ%lld: Memory fault on CQE write to LLC/DRAM",
+netdev_err(pf->netdev,
+"CQ%lld: Memory fault on CQE write to LLC/DRAM",
 qidx);
 }
@@ -1272,7 +1275,8 @@ static irqreturn_t otx2_q_intr_handler(int irq, void *data)
 (val & NIX_SQINT_BITS));
 if (val & BIT_ULL(42)) {
-netdev_err(pf->netdev, "SQ%lld: error reading NIX_LF_SQ_OP_INT, NIX_LF_ERR_INT 0x%llx\n",
+netdev_err(pf->netdev,
+"SQ%lld: error reading NIX_LF_SQ_OP_INT, NIX_LF_ERR_INT 0x%llx\n",
 qidx, otx2_read64(pf, NIX_LF_ERR_INT));
 goto done;
 }
@@ -1282,8 +1286,11 @@ static irqreturn_t otx2_q_intr_handler(int irq, void *data)
 goto chk_mnq_err_dbg;
 sq_op_err_code = FIELD_GET(GENMASK(7, 0), sq_op_err_dbg);
-netdev_err(pf->netdev, "SQ%lld: NIX_LF_SQ_OP_ERR_DBG(%llx) err=%s\n",
-qidx, sq_op_err_dbg, nix_sqoperr_e_str[sq_op_err_code]);
+netdev_err(pf->netdev,
+"SQ%lld: NIX_LF_SQ_OP_ERR_DBG(0x%llx) err=%s(%#x)\n",
+qidx, sq_op_err_dbg,
+nix_sqoperr_e_str[sq_op_err_code],
+sq_op_err_code);
 otx2_write64(pf, NIX_LF_SQ_OP_ERR_DBG, BIT_ULL(44));
@@ -1300,16 +1307,21 @@ static irqreturn_t otx2_q_intr_handler(int irq, void *data)
 goto chk_snd_err_dbg;
 mnq_err_code = FIELD_GET(GENMASK(7, 0), mnq_err_dbg);
-netdev_err(pf->netdev, "SQ%lld: NIX_LF_MNQ_ERR_DBG(%llx) err=%s\n",
-qidx, mnq_err_dbg, nix_mnqerr_e_str[mnq_err_code]);
+netdev_err(pf->netdev,
+"SQ%lld: NIX_LF_MNQ_ERR_DBG(0x%llx) err=%s(%#x)\n",
+qidx, mnq_err_dbg, nix_mnqerr_e_str[mnq_err_code],
+mnq_err_code);
 otx2_write64(pf, NIX_LF_MNQ_ERR_DBG, BIT_ULL(44));
 chk_snd_err_dbg:
 snd_err_dbg = otx2_read64(pf, NIX_LF_SEND_ERR_DBG);
 if (snd_err_dbg & BIT(44)) {
 snd_err_code = FIELD_GET(GENMASK(7, 0), snd_err_dbg);
-netdev_err(pf->netdev, "SQ%lld: NIX_LF_SND_ERR_DBG:0x%llx err=%s\n",
-qidx, snd_err_dbg, nix_snd_status_e_str[snd_err_code]);
+netdev_err(pf->netdev,
+"SQ%lld: NIX_LF_SND_ERR_DBG:0x%llx err=%s(%#x)\n",
+qidx, snd_err_dbg,
+nix_snd_status_e_str[snd_err_code],
+snd_err_code);
 otx2_write64(pf, NIX_LF_SEND_ERR_DBG, BIT_ULL(44));
 }
@@ -1589,6 +1601,7 @@ static void otx2_free_hw_resources(struct otx2_nic *pf)
 else
 otx2_cleanup_tx_cqes(pf, cq);
 }
+otx2_free_pending_sqe(pf);
 otx2_free_sq_res(pf);

@@ -318,23 +318,23 @@ enum nix_snd_status_e {
 NIX_SND_STATUS_EXT_ERR = 0x6,
 NIX_SND_STATUS_JUMP_FAULT = 0x7,
 NIX_SND_STATUS_JUMP_POISON = 0x8,
-NIX_SND_STATUS_CRC_ERR = 0x9,
-NIX_SND_STATUS_IMM_ERR = 0x10,
-NIX_SND_STATUS_SG_ERR = 0x11,
-NIX_SND_STATUS_MEM_ERR = 0x12,
-NIX_SND_STATUS_INVALID_SUBDC = 0x13,
-NIX_SND_STATUS_SUBDC_ORDER_ERR = 0x14,
-NIX_SND_STATUS_DATA_FAULT = 0x15,
-NIX_SND_STATUS_DATA_POISON = 0x16,
-NIX_SND_STATUS_NPC_DROP_ACTION = 0x17,
-NIX_SND_STATUS_LOCK_VIOL = 0x18,
-NIX_SND_STATUS_NPC_UCAST_CHAN_ERR = 0x19,
-NIX_SND_STATUS_NPC_MCAST_CHAN_ERR = 0x20,
-NIX_SND_STATUS_NPC_MCAST_ABORT = 0x21,
-NIX_SND_STATUS_NPC_VTAG_PTR_ERR = 0x22,
-NIX_SND_STATUS_NPC_VTAG_SIZE_ERR = 0x23,
-NIX_SND_STATUS_SEND_MEM_FAULT = 0x24,
-NIX_SND_STATUS_SEND_STATS_ERR = 0x25,
+NIX_SND_STATUS_CRC_ERR = 0x10,
+NIX_SND_STATUS_IMM_ERR = 0x11,
+NIX_SND_STATUS_SG_ERR = 0x12,
+NIX_SND_STATUS_MEM_ERR = 0x13,
+NIX_SND_STATUS_INVALID_SUBDC = 0x14,
+NIX_SND_STATUS_SUBDC_ORDER_ERR = 0x15,
+NIX_SND_STATUS_DATA_FAULT = 0x16,
+NIX_SND_STATUS_DATA_POISON = 0x17,
+NIX_SND_STATUS_NPC_DROP_ACTION = 0x20,
+NIX_SND_STATUS_LOCK_VIOL = 0x21,
+NIX_SND_STATUS_NPC_UCAST_CHAN_ERR = 0x22,
+NIX_SND_STATUS_NPC_MCAST_CHAN_ERR = 0x23,
+NIX_SND_STATUS_NPC_MCAST_ABORT = 0x24,
+NIX_SND_STATUS_NPC_VTAG_PTR_ERR = 0x25,
+NIX_SND_STATUS_NPC_VTAG_SIZE_ERR = 0x26,
+NIX_SND_STATUS_SEND_MEM_FAULT = 0x27,
+NIX_SND_STATUS_SEND_STATS_ERR = 0x28,
 NIX_SND_STATUS_MAX,
 };

@@ -1247,9 +1247,11 @@ void otx2_cleanup_rx_cqes(struct otx2_nic *pfvf, struct otx2_cq_queue *cq, int q
 void otx2_cleanup_tx_cqes(struct otx2_nic *pfvf, struct otx2_cq_queue *cq)
 {
+int tx_pkts = 0, tx_bytes = 0;
 struct sk_buff *skb = NULL;
 struct otx2_snd_queue *sq;
 struct nix_cqe_tx_s *cqe;
+struct netdev_queue *txq;
 int processed_cqe = 0;
 struct sg_list *sg;
 int qidx;
@@ -1270,12 +1272,20 @@ void otx2_cleanup_tx_cqes(struct otx2_nic *pfvf, struct otx2_cq_queue *cq)
 sg = &sq->sg[cqe->comp.sqe_id];
 skb = (struct sk_buff *)sg->skb;
 if (skb) {
+tx_bytes += skb->len;
+tx_pkts++;
 otx2_dma_unmap_skb_frags(pfvf, sg);
 dev_kfree_skb_any(skb);
 sg->skb = (u64)NULL;
 }
 }
+if (likely(tx_pkts)) {
+if (qidx >= pfvf->hw.tx_queues)
+qidx -= pfvf->hw.xdp_queues;
+txq = netdev_get_tx_queue(pfvf->netdev, qidx);
+netdev_tx_completed_queue(txq, tx_pkts, tx_bytes);
+}
 /* Free CQEs to HW */
 otx2_write64(pfvf, NIX_LF_CQ_OP_DOOR,
 ((u64)cq->cq_idx << 32) | processed_cqe);
@@ -1302,6 +1312,38 @@ int otx2_rxtx_enable(struct otx2_nic *pfvf, bool enable)
 return err;
 }
+void otx2_free_pending_sqe(struct otx2_nic *pfvf)
+{
+int tx_pkts = 0, tx_bytes = 0;
+struct sk_buff *skb = NULL;
+struct otx2_snd_queue *sq;
+struct netdev_queue *txq;
+struct sg_list *sg;
+int sq_idx, sqe;
+for (sq_idx = 0; sq_idx < pfvf->hw.tx_queues; sq_idx++) {
+sq = &pfvf->qset.sq[sq_idx];
+for (sqe = 0; sqe < sq->sqe_cnt; sqe++) {
+sg = &sq->sg[sqe];
+skb = (struct sk_buff *)sg->skb;
+if (skb) {
+tx_bytes += skb->len;
+tx_pkts++;
+otx2_dma_unmap_skb_frags(pfvf, sg);
+dev_kfree_skb_any(skb);
+sg->skb = (u64)NULL;
+}
+}
+if (!tx_pkts)
+continue;
+txq = netdev_get_tx_queue(pfvf->netdev, sq_idx);
+netdev_tx_completed_queue(txq, tx_pkts, tx_bytes);
+tx_pkts = 0;
+tx_bytes = 0;
+}
+}
 static void otx2_xdp_sqe_add_sg(struct otx2_snd_queue *sq, u64 dma_addr,
 int len, int *offset)
 {
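
Both cleanup paths added above report the freed packets and bytes with netdev_tx_completed_queue(). That accounting only balances if the transmit path reports the same bytes with netdev_tx_sent_queue() when descriptors are queued; otherwise byte queue limits (BQL) go out of sync and the queue can stall. A hedged sketch of the usual BQL pairing (generic driver skeleton, not otx2 code):

#include <linux/netdevice.h>

/* Illustrative BQL pattern: every byte handed to the hardware is reported
 * with netdev_tx_sent_queue(), and every completed (or discarded) descriptor
 * is reported with netdev_tx_completed_queue(). The two must stay balanced,
 * including on teardown paths such as otx2_free_pending_sqe() above.
 */
static netdev_tx_t demo_xmit(struct sk_buff *skb, struct net_device *ndev)
{
	struct netdev_queue *txq;

	txq = netdev_get_tx_queue(ndev, skb_get_queue_mapping(skb));

	/* ... map the skb and post it to the hardware ring ... */
	netdev_tx_sent_queue(txq, skb->len);
	return NETDEV_TX_OK;
}

static void demo_tx_clean(struct net_device *ndev, unsigned int qidx,
			  unsigned int pkts, unsigned int bytes)
{
	struct netdev_queue *txq = netdev_get_tx_queue(ndev, qidx);

	/* Completion or cleanup side: release the same packet/byte counts. */
	netdev_tx_completed_queue(txq, pkts, bytes);
}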

@@ -2582,9 +2582,13 @@ static void rtl_set_rx_mode(struct net_device *dev)
 if (dev->flags & IFF_PROMISC) {
 rx_mode |= AcceptAllPhys;
+} else if (!(dev->flags & IFF_MULTICAST)) {
+rx_mode &= ~AcceptMulticast;
 } else if (netdev_mc_count(dev) > MC_FILTER_LIMIT ||
 dev->flags & IFF_ALLMULTI ||
-tp->mac_version == RTL_GIGA_MAC_VER_35) {
+tp->mac_version == RTL_GIGA_MAC_VER_35 ||
+tp->mac_version == RTL_GIGA_MAC_VER_46 ||
+tp->mac_version == RTL_GIGA_MAC_VER_48) {
 /* accept all multicasts */
 } else if (netdev_mc_empty(dev)) {
 rx_mode &= ~AcceptMulticast;

@@ -259,7 +259,7 @@
 ((val) << XGMAC_PPS_MINIDX(x))
 #define XGMAC_PPSCMD_START 0x2
 #define XGMAC_PPSCMD_STOP 0x5
-#define XGMAC_PPSEN0 BIT(4)
+#define XGMAC_PPSENx(x) BIT(4 + (x) * 8)
 #define XGMAC_PPSx_TARGET_TIME_SEC(x) (0x00000d80 + (x) * 0x10)
 #define XGMAC_PPSx_TARGET_TIME_NSEC(x) (0x00000d84 + (x) * 0x10)
 #define XGMAC_TRGTBUSY0 BIT(31)

@@ -1178,7 +1178,19 @@ static int dwxgmac2_flex_pps_config(void __iomem *ioaddr, int index,
 val |= XGMAC_PPSCMDx(index, XGMAC_PPSCMD_START);
 val |= XGMAC_TRGTMODSELx(index, XGMAC_PPSCMD_START);
-val |= XGMAC_PPSEN0;
+/* XGMAC Core has 4 PPS outputs at most.
+*
+* Prior XGMAC Core 3.20, Fixed mode or Flexible mode are selectable for
+* PPS0 only via PPSEN0. PPS{1,2,3} are in Flexible mode by default,
+* and can not be switched to Fixed mode, since PPSEN{1,2,3} are
+* read-only reserved to 0.
+* But we always set PPSEN{1,2,3} do not make things worse ;-)
+*
+* From XGMAC Core 3.20 and later, PPSEN{0,1,2,3} are writable and must
+* be set, or the PPS outputs stay in Fixed PPS mode by default.
+*/
+val |= XGMAC_PPSENx(index);
 writel(cfg->start.tv_sec, ioaddr + XGMAC_PPSx_TARGET_TIME_SEC(index));
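
As a quick standalone illustration (not driver code) of the bit layout that the new XGMAC_PPSENx() macro encodes, the per-output enable bits land at bits 4, 12, 20 and 28:

#include <stdio.h>

#define BIT(n)           (1u << (n))
#define XGMAC_PPSENx(x)  BIT(4 + (x) * 8)	/* same layout as the macro above */

int main(void)
{
	/* Prints PPSEN0 = 0x00000010 ... PPSEN3 = 0x10000000 */
	for (int idx = 0; idx < 4; idx++)
		printf("PPSEN%d = 0x%08x\n", idx, XGMAC_PPSENx(idx));
	return 0;
}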

@@ -1588,10 +1588,10 @@ static void am65_cpsw_nuss_mac_link_up(struct phylink_config *config, struct phy
 /* rx_pause/tx_pause */
 if (rx_pause)
-mac_control |= CPSW_SL_CTL_RX_FLOW_EN;
+mac_control |= CPSW_SL_CTL_TX_FLOW_EN;
 if (tx_pause)
-mac_control |= CPSW_SL_CTL_TX_FLOW_EN;
+mac_control |= CPSW_SL_CTL_RX_FLOW_EN;
 cpsw_sl_ctl_set(port->slave.mac_sl, mac_control);

@@ -177,7 +177,7 @@ static void icss_iep_set_counter(struct icss_iep *iep, u64 ns)
 if (iep->plat_data->flags & ICSS_IEP_64BIT_COUNTER_SUPPORT)
 writel(upper_32_bits(ns), iep->base +
 iep->plat_data->reg_offs[ICSS_IEP_COUNT_REG1]);
-writel(upper_32_bits(ns), iep->base + iep->plat_data->reg_offs[ICSS_IEP_COUNT_REG0]);
+writel(lower_32_bits(ns), iep->base + iep->plat_data->reg_offs[ICSS_IEP_COUNT_REG0]);
 }
 static void icss_iep_update_to_next_boundary(struct icss_iep *iep, u64 start_ns);

@@ -163,7 +163,6 @@ typedef void buffer_t;
 /* Information about built-in Ethernet MAC interfaces */
 struct eth_plat_info {
-u8 phy; /* MII PHY ID, 0 - 31 */
 u8 rxq; /* configurable, currently 0 - 31 only */
 u8 txreadyq;
 u8 hwaddr[ETH_ALEN];
@@ -1583,7 +1582,7 @@ static int ixp4xx_eth_probe(struct platform_device *pdev)
 if ((err = register_netdev(ndev)))
 goto err_phy_dis;
-netdev_info(ndev, "%s: MII PHY %i on %s\n", ndev->name, plat->phy,
+netdev_info(ndev, "%s: MII PHY %s on %s\n", ndev->name, phydev_name(phydev),
 npe_name(port->npe));
 return 0;

@@ -16,6 +16,7 @@
 MODULE_AUTHOR("Calvin Johnson <calvin.johnson@oss.nxp.com>");
 MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("ACPI MDIO bus (Ethernet PHY) accessors");
 /**
 * __acpi_mdiobus_register - Register mii_bus and create PHYs from the ACPI ASL.

@@ -14,6 +14,7 @@
 MODULE_AUTHOR("Calvin Johnson <calvin.johnson@oss.nxp.com>");
 MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("FWNODE MDIO bus (Ethernet PHY) accessors");
 static struct pse_control *
 fwnode_find_pse_control(struct fwnode_handle *fwnode)

@@ -205,3 +205,4 @@ module_platform_driver(aspeed_mdio_driver);
 MODULE_AUTHOR("Andrew Jeffery <andrew@aj.id.au>");
 MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("ASPEED MDIO bus controller");

@@ -263,3 +263,4 @@ void free_mdio_bitbang(struct mii_bus *bus)
 EXPORT_SYMBOL(free_mdio_bitbang);
 MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("Bitbanged MDIO buses");

@@ -25,6 +25,7 @@
 MODULE_AUTHOR("Grant Likely <grant.likely@secretlab.ca>");
 MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("OpenFirmware MDIO bus (Ethernet PHY) accessors");
 /* Extract the clause 22 phy ID from the compatible string of the form
 * ethernet-phy-idAAAA.BBBB */

@@ -942,3 +942,4 @@ struct bcm_ptp_private *bcm_ptp_probe(struct phy_device *phydev)
 EXPORT_SYMBOL_GPL(bcm_ptp_probe);
 MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Broadcom PHY PTP driver");

@@ -223,3 +223,4 @@ static struct phy_driver bcm87xx_driver[] = {
 module_phy_driver(bcm87xx_driver);
 MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("Broadcom BCM87xx PHY driver");

@@ -1616,6 +1616,7 @@ struct phylink *phylink_create(struct phylink_config *config,
 pl->config = config;
 if (config->type == PHYLINK_NETDEV) {
 pl->netdev = to_net_dev(config->dev);
+netif_carrier_off(pl->netdev);
 } else if (config->type == PHYLINK_DEV) {
 pl->dev = config->dev;
 } else {
@@ -3726,3 +3727,4 @@ static int __init phylink_init(void)
 module_init(phylink_init);
 MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("phylink models the MAC to optional PHY connection");

@@ -3153,3 +3153,4 @@ module_exit(sfp_exit);
 MODULE_ALIAS("platform:sfp");
 MODULE_AUTHOR("Russell King");
 MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("SFP cage support");

@@ -570,8 +570,8 @@ static struct bpf_prog *get_filter(struct sock_fprog *uprog)
 /* uprog->len is unsigned short, so no overflow here */
 fprog.len = uprog->len;
-fprog.filter = memdup_user(uprog->filter,
-uprog->len * sizeof(struct sock_filter));
+fprog.filter = memdup_array_user(uprog->filter,
+uprog->len, sizeof(struct sock_filter));
 if (IS_ERR(fprog.filter))
 return ERR_CAST(fprog.filter);
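
memdup_array_user() folds the element-count multiplication into an overflow-checked helper, so a large count cannot wrap the allocation size the way an open-coded "count * size" could. A hedged sketch of the pattern, using a hypothetical ioctl-style copy rather than the ppp code:

#include <linux/types.h>
#include <linux/string.h>	/* memdup_array_user() */
#include <linux/slab.h>
#include <linux/err.h>

struct demo_elem { u32 a, b; };

/* Copy a user-supplied array of 'count' elements into the kernel.
 * memdup_array_user() multiplies count * sizeof() with overflow checking
 * and returns an ERR_PTR() on overflow or allocation failure.
 */
static int demo_set_table(const void __user *uptr, u16 count)
{
	struct demo_elem *elems;

	/* Overflow-checked equivalent of:
	 *   memdup_user(uptr, count * sizeof(*elems));
	 */
	elems = memdup_array_user(uptr, count, sizeof(*elems));
	if (IS_ERR(elems))
		return PTR_ERR(elems);

	/* ... use the table ... */
	kfree(elems);
	return 0;
}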

@@ -108,6 +108,7 @@ int ptp_open(struct posix_clock_context *pccontext, fmode_t fmode)
 container_of(pccontext->clk, struct ptp_clock, clock);
 struct timestamp_event_queue *queue;
 char debugfsname[32];
+unsigned long flags;
 queue = kzalloc(sizeof(*queue), GFP_KERNEL);
 if (!queue)
@@ -119,7 +120,9 @@ int ptp_open(struct posix_clock_context *pccontext, fmode_t fmode)
 }
 bitmap_set(queue->mask, 0, PTP_MAX_CHANNELS);
 spin_lock_init(&queue->lock);
+spin_lock_irqsave(&ptp->tsevqs_lock, flags);
 list_add_tail(&queue->qlist, &ptp->tsevqs);
+spin_unlock_irqrestore(&ptp->tsevqs_lock, flags);
 pccontext->private_clkdata = queue;
 /* Debugfs contents */
@@ -139,16 +142,16 @@ int ptp_release(struct posix_clock_context *pccontext)
 {
 struct timestamp_event_queue *queue = pccontext->private_clkdata;
 unsigned long flags;
-if (queue) {
-debugfs_remove(queue->debugfs_instance);
-pccontext->private_clkdata = NULL;
-spin_lock_irqsave(&queue->lock, flags);
-list_del(&queue->qlist);
-spin_unlock_irqrestore(&queue->lock, flags);
-bitmap_free(queue->mask);
-kfree(queue);
-}
+struct ptp_clock *ptp =
+container_of(pccontext->clk, struct ptp_clock, clock);
+debugfs_remove(queue->debugfs_instance);
+pccontext->private_clkdata = NULL;
+spin_lock_irqsave(&ptp->tsevqs_lock, flags);
+list_del(&queue->qlist);
+spin_unlock_irqrestore(&ptp->tsevqs_lock, flags);
+bitmap_free(queue->mask);
+kfree(queue);
 return 0;
 }
@@ -585,7 +588,5 @@ ssize_t ptp_read(struct posix_clock_context *pccontext, uint rdflags,
 free_event:
 kfree(event);
 exit:
-if (result < 0)
-ptp_release(pccontext);
 return result;
 }

@@ -179,11 +179,11 @@ static void ptp_clock_release(struct device *dev)
 mutex_destroy(&ptp->pincfg_mux);
 mutex_destroy(&ptp->n_vclocks_mux);
 /* Delete first entry */
+spin_lock_irqsave(&ptp->tsevqs_lock, flags);
 tsevq = list_first_entry(&ptp->tsevqs, struct timestamp_event_queue,
 qlist);
-spin_lock_irqsave(&tsevq->lock, flags);
 list_del(&tsevq->qlist);
-spin_unlock_irqrestore(&tsevq->lock, flags);
+spin_unlock_irqrestore(&ptp->tsevqs_lock, flags);
 bitmap_free(tsevq->mask);
 kfree(tsevq);
 debugfs_remove(ptp->debugfs_root);
@@ -247,6 +247,7 @@ struct ptp_clock *ptp_clock_register(struct ptp_clock_info *info,
 if (!queue)
 goto no_memory_queue;
 list_add_tail(&queue->qlist, &ptp->tsevqs);
+spin_lock_init(&ptp->tsevqs_lock);
 queue->mask = bitmap_alloc(PTP_MAX_CHANNELS, GFP_KERNEL);
 if (!queue->mask)
 goto no_memory_bitmap;
@@ -407,6 +408,7 @@ void ptp_clock_event(struct ptp_clock *ptp, struct ptp_clock_event *event)
 {
 struct timestamp_event_queue *tsevq;
 struct pps_event_time evt;
+unsigned long flags;
 switch (event->type) {
@@ -415,10 +417,12 @@ void ptp_clock_event(struct ptp_clock *ptp, struct ptp_clock_event *event)
 case PTP_CLOCK_EXTTS:
 /* Enqueue timestamp on selected queues */
+spin_lock_irqsave(&ptp->tsevqs_lock, flags);
 list_for_each_entry(tsevq, &ptp->tsevqs, qlist) {
 if (test_bit((unsigned int)event->index, tsevq->mask))
 enqueue_external_timestamp(tsevq, event);
 }
+spin_unlock_irqrestore(&ptp->tsevqs_lock, flags);
 wake_up_interruptible(&ptp->tsev_wq);
 break;

@@ -44,6 +44,7 @@ struct ptp_clock {
 struct pps_device *pps_source;
 long dialed_frequency; /* remembers the frequency adjustment */
 struct list_head tsevqs; /* timestamp fifo list */
+spinlock_t tsevqs_lock; /* protects tsevqs from concurrent access */
 struct mutex pincfg_mux; /* protect concurrent info->pin_config access */
 wait_queue_head_t tsev_wq;
 int defunct; /* tells readers to go away when clock is being removed */

@@ -3675,7 +3675,7 @@ static void qeth_flush_queue(struct qeth_qdio_out_q *queue)
 static void qeth_check_outbound_queue(struct qeth_qdio_out_q *queue)
 {
 /*
-* check if weed have to switch to non-packing mode or if
+* check if we have to switch to non-packing mode or if
 * we have to get a pci flag out on the queue
 */
 if ((atomic_read(&queue->used_buffers) <= QETH_LOW_WATERMARK_PACK) ||

@@ -84,6 +84,17 @@
 */
 #define __bpf_kfunc __used noinline
+#define __bpf_kfunc_start_defs() \
+__diag_push(); \
+__diag_ignore_all("-Wmissing-declarations", \
+"Global kfuncs as their definitions will be in BTF");\
+__diag_ignore_all("-Wmissing-prototypes", \
+"Global kfuncs as their definitions will be in BTF")
+#define __bpf_kfunc_end_defs() __diag_pop()
+#define __bpf_hook_start() __bpf_kfunc_start_defs()
+#define __bpf_hook_end() __bpf_kfunc_end_defs()
 /*
 * Return the name of the passed struct, if exists, or halt the build if for
 * example the structure gets renamed. In this way, developers have to revisit

@@ -1045,10 +1045,10 @@ static inline int ethtool_mm_frag_size_min_to_add(u32 val_min, u32 *val_add,
 /**
 * ethtool_sprintf - Write formatted string to ethtool string data
-* @data: Pointer to start of string to update
+* @data: Pointer to a pointer to the start of string to update
 * @fmt: Format of string to write
 *
-* Write formatted string to data. Update data to point at start of
+* Write formatted string to *data. Update *data to point at start of
 * next string.
 */
 extern __printf(2, 3) void ethtool_sprintf(u8 **data, const char *fmt, ...);
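
The clarified kernel-doc matches how the helper is typically used: a driver keeps a cursor into the flat array of ETH_GSTRING_LEN-sized slots and passes the cursor's address, and each call advances it to the next slot. A small hedged sketch with hypothetical stat names (not taken from any driver):

#include <linux/netdevice.h>
#include <linux/ethtool.h>

/* Fill the ethtool string set for a hypothetical device with four queues.
 * 'data' points at a buffer of ETH_GSTRING_LEN-sized slots; each
 * ethtool_sprintf() call writes one slot and advances the cursor.
 */
static void demo_get_strings(struct net_device *dev, u32 stringset, u8 *data)
{
	int i;

	if (stringset != ETH_SS_STATS)
		return;

	for (i = 0; i < 4; i++) {
		ethtool_sprintf(&data, "queue%d_tx_packets", i);
		ethtool_sprintf(&data, "queue%d_tx_bytes", i);
	}
}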

@@ -200,7 +200,7 @@ static inline void idr_preload_end(void)
 */
 #define idr_for_each_entry_ul(idr, entry, tmp, id) \
 for (tmp = 0, id = 0; \
-tmp <= id && ((entry) = idr_get_next_ul(idr, &(id))) != NULL; \
+((entry) = tmp <= id ? idr_get_next_ul(idr, &(id)) : NULL) != NULL; \
 tmp = id, ++id)
 /**
@@ -224,10 +224,12 @@ static inline void idr_preload_end(void)
 * @id: Entry ID.
 *
 * Continue to iterate over entries, continuing after the current position.
+* After normal termination @entry is left with the value NULL. This
+* is convenient for a "not found" value.
 */
 #define idr_for_each_entry_continue_ul(idr, entry, tmp, id) \
 for (tmp = id; \
-tmp <= id && ((entry) = idr_get_next_ul(idr, &(id))) != NULL; \
+((entry) = tmp <= id ? idr_get_next_ul(idr, &(id)) : NULL) != NULL; \
 tmp = id, ++id)
 /*
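
The rewritten loop condition guarantees that the iteration cursor ends up NULL when the loop terminates normally (previously it could still hold the last element once the ID wrapped), which is exactly what the new comment promises. A hedged usage sketch with a hypothetical registry, not from the patch:

#include <linux/idr.h>

struct demo_obj {
	int value;
};

/* Return the first object whose value matches, or NULL. This relies on
 * @entry being NULL after normal loop termination, as documented above.
 */
static struct demo_obj *demo_find(struct idr *idr, int value)
{
	struct demo_obj *entry;
	unsigned long tmp, id;

	idr_for_each_entry_ul(idr, entry, tmp, id) {
		if (entry->value == value)
			break;		/* found: entry != NULL */
	}
	return entry;			/* NULL means "not found" */
}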

@@ -152,7 +152,7 @@ struct tcp_request_sock {
 u64 snt_synack; /* first SYNACK sent time */
 bool tfo_listener;
 bool is_mptcp;
-s8 req_usec_ts;
+bool req_usec_ts;
 #if IS_ENABLED(CONFIG_MPTCP)
 bool drop_req;
 #endif

@@ -40,8 +40,8 @@ struct flowi_common {
 #define FLOWI_FLAG_KNOWN_NH 0x02
 __u32 flowic_secid;
 kuid_t flowic_uid;
-struct flowi_tunnel flowic_tun_key;
 __u32 flowic_multipath_hash;
+struct flowi_tunnel flowic_tun_key;
 };
 union flowi_uli {

@@ -20,21 +20,6 @@ static inline struct nf_conn_act_ct_ext *nf_conn_act_ct_ext_find(const struct nf
 #endif
 }
-static inline struct nf_conn_act_ct_ext *nf_conn_act_ct_ext_add(struct nf_conn *ct)
-{
-#if IS_ENABLED(CONFIG_NET_ACT_CT)
-struct nf_conn_act_ct_ext *act_ct = nf_ct_ext_find(ct, NF_CT_EXT_ACT_CT);
-if (act_ct)
-return act_ct;
-act_ct = nf_ct_ext_add(ct, NF_CT_EXT_ACT_CT, GFP_ATOMIC);
-return act_ct;
-#else
-return NULL;
-#endif
-}
 static inline void nf_conn_act_ct_ext_fill(struct sk_buff *skb, struct nf_conn *ct,
 enum ip_conntrack_info ctinfo)
 {
@@ -47,4 +32,23 @@ static inline void nf_conn_act_ct_ext_fill(struct sk_buff *skb, struct nf_conn *
 #endif
 }
+static inline struct
+nf_conn_act_ct_ext *nf_conn_act_ct_ext_add(struct sk_buff *skb,
+struct nf_conn *ct,
+enum ip_conntrack_info ctinfo)
+{
+#if IS_ENABLED(CONFIG_NET_ACT_CT)
+struct nf_conn_act_ct_ext *act_ct = nf_ct_ext_find(ct, NF_CT_EXT_ACT_CT);
+if (act_ct)
+return act_ct;
+act_ct = nf_ct_ext_add(ct, NF_CT_EXT_ACT_CT, GFP_ATOMIC);
+nf_conn_act_ct_ext_fill(skb, ct, ctinfo);
+return act_ct;
+#else
+return NULL;
+#endif
+}
 #endif /* _NF_CONNTRACK_ACT_CT_H */

@@ -124,7 +124,7 @@ struct tcp_ao_info {
 #define tcp_hash_fail(msg, family, skb, fmt, ...) \
 do { \
 const struct tcphdr *th = tcp_hdr(skb); \
-char hdr_flags[5] = {}; \
+char hdr_flags[6]; \
 char *f = hdr_flags; \
 \
 if (th->fin) \
@@ -133,17 +133,18 @@ do { \
 *f++ = 'S'; \
 if (th->rst) \
 *f++ = 'R'; \
+if (th->psh) \
+*f++ = 'P'; \
 if (th->ack) \
-*f++ = 'A'; \
-if (f != hdr_flags) \
-*f = ' '; \
+*f++ = '.'; \
+*f = 0; \
 if ((family) == AF_INET) { \
-net_info_ratelimited("%s for (%pI4, %d)->(%pI4, %d) %s" fmt "\n", \
+net_info_ratelimited("%s for %pI4.%d->%pI4.%d [%s] " fmt "\n", \
 msg, &ip_hdr(skb)->saddr, ntohs(th->source), \
 &ip_hdr(skb)->daddr, ntohs(th->dest), \
 hdr_flags, ##__VA_ARGS__); \
 } else { \
-net_info_ratelimited("%s for [%pI6c]:%u->[%pI6c]:%u %s" fmt "\n", \
+net_info_ratelimited("%s for [%pI6c].%d->[%pI6c].%d [%s]" fmt "\n", \
 msg, &ipv6_hdr(skb)->saddr, ntohs(th->source), \
 &ipv6_hdr(skb)->daddr, ntohs(th->dest), \
 hdr_flags, ##__VA_ARGS__); \

@@ -3,8 +3,8 @@
 /* Documentation/netlink/specs/nfsd.yaml */
 /* YNL-GEN uapi header */
-#ifndef _UAPI_LINUX_NFSD_H
-#define _UAPI_LINUX_NFSD_H
+#ifndef _UAPI_LINUX_NFSD_NETLINK_H
+#define _UAPI_LINUX_NFSD_NETLINK_H
 #define NFSD_FAMILY_NAME "nfsd"
 #define NFSD_FAMILY_VERSION 1
@@ -36,4 +36,4 @@ enum {
 NFSD_CMD_MAX = (__NFSD_CMD_MAX - 1)
 };
-#endif /* _UAPI_LINUX_NFSD_H */
+#endif /* _UAPI_LINUX_NFSD_NETLINK_H */

@@ -782,9 +782,7 @@ struct bpf_iter_num_kern {
 	int end; /* final value, exclusive */
 } __aligned(8);
 
-__diag_push();
-__diag_ignore_all("-Wmissing-prototypes",
-		  "Global functions as their definitions will be in vmlinux BTF");
+__bpf_kfunc_start_defs();
 
 __bpf_kfunc int bpf_iter_num_new(struct bpf_iter_num *it, int start, int end)
 {
@@ -843,4 +841,4 @@ __bpf_kfunc void bpf_iter_num_destroy(struct bpf_iter_num *it)
 	s->cur = s->end = 0;
 }
 
-__diag_pop();
+__bpf_kfunc_end_defs();
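
__bpf_kfunc_start_defs()/__bpf_kfunc_end_defs() replace the __diag_push()/__diag_ignore_all()/__diag_pop() boilerplate that every kfunc definition block used to carry; the same substitution repeats throughout the rest of these hunks. A plausible shape of the macro pair, assuming it simply wraps the old diagnostics calls (the exact upstream definition may differ):

    /* Sketch only; the real macros live in the kernel headers and may differ. */
    #define __bpf_kfunc_start_defs()					\
    	__diag_push();							\
    	__diag_ignore_all("-Wmissing-prototypes",			\
    			  "Global kfuncs as their definitions will be in BTF")

    #define __bpf_kfunc_end_defs()	__diag_pop()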

@@ -282,7 +282,7 @@ static struct bpf_iter_reg bpf_cgroup_reg_info = {
 	.ctx_arg_info_size	= 1,
 	.ctx_arg_info		= {
 		{ offsetof(struct bpf_iter__cgroup, cgroup),
-		  PTR_TO_BTF_ID_OR_NULL },
+		  PTR_TO_BTF_ID_OR_NULL | PTR_TRUSTED },
 	},
 	.seq_info		= &cgroup_iter_seq_info,
 };
@@ -305,9 +305,7 @@ struct bpf_iter_css_kern {
 	unsigned int flags;
 } __attribute__((aligned(8)));
 
-__diag_push();
-__diag_ignore_all("-Wmissing-prototypes",
-		  "Global functions as their definitions will be in vmlinux BTF");
+__bpf_kfunc_start_defs();
 
 __bpf_kfunc int bpf_iter_css_new(struct bpf_iter_css *it,
 		struct cgroup_subsys_state *start, unsigned int flags)
@@ -358,4 +356,4 @@ __bpf_kfunc void bpf_iter_css_destroy(struct bpf_iter_css *it)
 {
 }
 
-__diag_pop();
+__bpf_kfunc_end_defs();

@@ -34,9 +34,7 @@ static bool cpu_valid(u32 cpu)
 	return cpu < nr_cpu_ids;
 }
 
-__diag_push();
-__diag_ignore_all("-Wmissing-prototypes",
-		  "Global kfuncs as their definitions will be in BTF");
+__bpf_kfunc_start_defs();
 
 /**
  * bpf_cpumask_create() - Create a mutable BPF cpumask.
@@ -407,7 +405,7 @@ __bpf_kfunc u32 bpf_cpumask_any_and_distribute(const struct cpumask *src1,
 	return cpumask_any_and_distribute(src1, src2);
 }
 
-__diag_pop();
+__bpf_kfunc_end_defs();
 
 BTF_SET8_START(cpumask_kfunc_btf_ids)
 BTF_ID_FLAGS(func, bpf_cpumask_create, KF_ACQUIRE | KF_RET_NULL)

@@ -1177,13 +1177,6 @@ BPF_CALL_3(bpf_timer_init, struct bpf_timer_kern *, timer, struct bpf_map *, map
 		ret = -EBUSY;
 		goto out;
 	}
-	if (!atomic64_read(&map->usercnt)) {
-		/* maps with timers must be either held by user space
-		 * or pinned in bpffs.
-		 */
-		ret = -EPERM;
-		goto out;
-	}
 	/* allocate hrtimer via map_kmalloc to use memcg accounting */
 	t = bpf_map_kmalloc_node(map, sizeof(*t), GFP_ATOMIC, map->numa_node);
 	if (!t) {
@@ -1196,7 +1189,21 @@ BPF_CALL_3(bpf_timer_init, struct bpf_timer_kern *, timer, struct bpf_map *, map
 	rcu_assign_pointer(t->callback_fn, NULL);
 	hrtimer_init(&t->timer, clockid, HRTIMER_MODE_REL_SOFT);
 	t->timer.function = bpf_timer_cb;
-	timer->timer = t;
+	WRITE_ONCE(timer->timer, t);
+	/* Guarantee the order between timer->timer and map->usercnt. So
+	 * when there are concurrent uref release and bpf timer init, either
+	 * bpf_timer_cancel_and_free() called by uref release reads a no-NULL
+	 * timer or atomic64_read() below returns a zero usercnt.
+	 */
+	smp_mb();
+	if (!atomic64_read(&map->usercnt)) {
+		/* maps with timers must be either held by user space
+		 * or pinned in bpffs.
+		 */
+		WRITE_ONCE(timer->timer, NULL);
+		kfree(t);
+		ret = -EPERM;
+	}
 out:
 	__bpf_spin_unlock_irqrestore(&timer->lock);
 	return ret;
@@ -1374,7 +1381,7 @@ void bpf_timer_cancel_and_free(void *val)
 	/* The subsequent bpf_timer_start/cancel() helpers won't be able to use
 	 * this timer, since it won't be initialized.
 	 */
-	timer->timer = NULL;
+	WRITE_ONCE(timer->timer, NULL);
 out:
 	__bpf_spin_unlock_irqrestore(&timer->lock);
 	if (!t)
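
The barrier pairs bpf_timer_init() against the uref-release path (not shown in these hunks): release drops map->usercnt and then walks the map calling bpf_timer_cancel_and_free(), while init publishes timer->timer with WRITE_ONCE() and only afterwards re-checks usercnt. With the full barrier between the two steps on the init side, either the release path sees the freshly stored timer and cancels it, or init sees usercnt already at zero and undoes its own work, so a timer can no longer be installed unnoticed after the last user reference is gone.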
@@ -1886,9 +1893,7 @@ void bpf_rb_root_free(const struct btf_field *field, void *rb_root,
 	}
 }
 
-__diag_push();
-__diag_ignore_all("-Wmissing-prototypes",
-		  "Global functions as their definitions will be in vmlinux BTF");
+__bpf_kfunc_start_defs();
 
 __bpf_kfunc void *bpf_obj_new_impl(u64 local_type_id__k, void *meta__ign)
 {
@@ -2505,7 +2510,7 @@ __bpf_kfunc void bpf_throw(u64 cookie)
 	WARN(1, "A call to BPF exception callback should never return\n");
 }
 
-__diag_pop();
+__bpf_kfunc_end_defs();
 
 BTF_SET8_START(generic_btf_ids)
 #ifdef CONFIG_KEXEC_CORE
@@ -2564,15 +2569,17 @@ BTF_ID_FLAGS(func, bpf_iter_num_destroy, KF_ITER_DESTROY)
 BTF_ID_FLAGS(func, bpf_iter_task_vma_new, KF_ITER_NEW | KF_RCU)
 BTF_ID_FLAGS(func, bpf_iter_task_vma_next, KF_ITER_NEXT | KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_iter_task_vma_destroy, KF_ITER_DESTROY)
+#ifdef CONFIG_CGROUPS
 BTF_ID_FLAGS(func, bpf_iter_css_task_new, KF_ITER_NEW | KF_TRUSTED_ARGS)
 BTF_ID_FLAGS(func, bpf_iter_css_task_next, KF_ITER_NEXT | KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_iter_css_task_destroy, KF_ITER_DESTROY)
-BTF_ID_FLAGS(func, bpf_iter_task_new, KF_ITER_NEW | KF_TRUSTED_ARGS | KF_RCU_PROTECTED)
-BTF_ID_FLAGS(func, bpf_iter_task_next, KF_ITER_NEXT | KF_RET_NULL)
-BTF_ID_FLAGS(func, bpf_iter_task_destroy, KF_ITER_DESTROY)
 BTF_ID_FLAGS(func, bpf_iter_css_new, KF_ITER_NEW | KF_TRUSTED_ARGS | KF_RCU_PROTECTED)
 BTF_ID_FLAGS(func, bpf_iter_css_next, KF_ITER_NEXT | KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_iter_css_destroy, KF_ITER_DESTROY)
+#endif
+BTF_ID_FLAGS(func, bpf_iter_task_new, KF_ITER_NEW | KF_TRUSTED_ARGS | KF_RCU_PROTECTED)
+BTF_ID_FLAGS(func, bpf_iter_task_next, KF_ITER_NEXT | KF_RET_NULL)
+BTF_ID_FLAGS(func, bpf_iter_task_destroy, KF_ITER_DESTROY)
 BTF_ID_FLAGS(func, bpf_dynptr_adjust)
 BTF_ID_FLAGS(func, bpf_dynptr_is_null)
 BTF_ID_FLAGS(func, bpf_dynptr_is_rdonly)

@@ -193,9 +193,7 @@ static int __init bpf_map_iter_init(void)
 
 late_initcall(bpf_map_iter_init);
 
-__diag_push();
-__diag_ignore_all("-Wmissing-prototypes",
-		  "Global functions as their definitions will be in vmlinux BTF");
+__bpf_kfunc_start_defs();
 
 __bpf_kfunc s64 bpf_map_sum_elem_count(const struct bpf_map *map)
 {
@@ -213,7 +211,7 @@ __bpf_kfunc s64 bpf_map_sum_elem_count(const struct bpf_map *map)
 	return ret;
 }
 
-__diag_pop();
+__bpf_kfunc_end_defs();
 
 BTF_SET8_START(bpf_map_iter_kfunc_ids)
 BTF_ID_FLAGS(func, bpf_map_sum_elem_count, KF_TRUSTED_ARGS)

@@ -704,7 +704,7 @@ static struct bpf_iter_reg task_reg_info = {
 	.ctx_arg_info_size	= 1,
 	.ctx_arg_info		= {
 		{ offsetof(struct bpf_iter__task, task),
-		  PTR_TO_BTF_ID_OR_NULL },
+		  PTR_TO_BTF_ID_OR_NULL | PTR_TRUSTED },
 	},
 	.seq_info		= &task_seq_info,
 	.fill_link_info		= bpf_iter_fill_link_info,
@@ -822,9 +822,7 @@ struct bpf_iter_task_vma_kern {
 	struct bpf_iter_task_vma_kern_data *data;
 } __attribute__((aligned(8)));
 
-__diag_push();
-__diag_ignore_all("-Wmissing-prototypes",
-		  "Global functions as their definitions will be in vmlinux BTF");
+__bpf_kfunc_start_defs();
 
 __bpf_kfunc int bpf_iter_task_vma_new(struct bpf_iter_task_vma *it,
 				      struct task_struct *task, u64 addr)
@@ -890,7 +888,9 @@ __bpf_kfunc void bpf_iter_task_vma_destroy(struct bpf_iter_task_vma *it)
 	}
 }
 
-__diag_pop();
+__bpf_kfunc_end_defs();
+
+#ifdef CONFIG_CGROUPS
 
 struct bpf_iter_css_task {
 	__u64 __opaque[1];
@@ -900,9 +900,7 @@ struct bpf_iter_css_task_kern {
 	struct css_task_iter *css_it;
 } __attribute__((aligned(8)));
 
-__diag_push();
-__diag_ignore_all("-Wmissing-prototypes",
-		  "Global functions as their definitions will be in vmlinux BTF");
+__bpf_kfunc_start_defs();
 
 __bpf_kfunc int bpf_iter_css_task_new(struct bpf_iter_css_task *it,
 		struct cgroup_subsys_state *css, unsigned int flags)
@@ -948,7 +946,9 @@ __bpf_kfunc void bpf_iter_css_task_destroy(struct bpf_iter_css_task *it)
 	bpf_mem_free(&bpf_global_ma, kit->css_it);
 }
 
-__diag_pop();
+__bpf_kfunc_end_defs();
+
+#endif /* CONFIG_CGROUPS */
 
 struct bpf_iter_task {
 	__u64 __opaque[3];
@@ -969,9 +969,7 @@ enum {
 	BPF_TASK_ITER_PROC_THREADS
 };
 
-__diag_push();
-__diag_ignore_all("-Wmissing-prototypes",
-		  "Global functions as their definitions will be in vmlinux BTF");
+__bpf_kfunc_start_defs();
 
 __bpf_kfunc int bpf_iter_task_new(struct bpf_iter_task *it,
 		struct task_struct *task__nullable, unsigned int flags)
@@ -1041,7 +1039,7 @@ __bpf_kfunc void bpf_iter_task_destroy(struct bpf_iter_task *it)
 {
 }
 
-__diag_pop();
+__bpf_kfunc_end_defs();
 
 DEFINE_PER_CPU(struct mmap_unlock_irq_work, mmap_unlock_work);

@@ -3742,7 +3742,12 @@ static int backtrack_insn(struct bpf_verifier_env *env, int idx, int subseq_idx,
 	if (class == BPF_ALU || class == BPF_ALU64) {
 		if (!bt_is_reg_set(bt, dreg))
 			return 0;
-		if (opcode == BPF_MOV) {
+		if (opcode == BPF_END || opcode == BPF_NEG) {
+			/* sreg is reserved and unused
+			 * dreg still need precision before this insn
+			 */
+			return 0;
+		} else if (opcode == BPF_MOV) {
 			if (BPF_SRC(insn->code) == BPF_X) {
 				/* dreg = sreg or dreg = (s8, s16, s32)sreg
 				 * dreg needs precision after this insn
@@ -4674,7 +4679,7 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
 		   insn->imm != 0 && env->bpf_capable) {
 		struct bpf_reg_state fake_reg = {};
 
-		__mark_reg_known(&fake_reg, (u32)insn->imm);
+		__mark_reg_known(&fake_reg, insn->imm);
 		fake_reg.type = SCALAR_VALUE;
 		save_register_state(state, spi, &fake_reg, size);
 	} else if (reg && is_spillable_regtype(reg->type)) {
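
Dropping the (u32) cast matters because the spilled immediate is tracked as a 64-bit known constant: a negative imm must be sign-extended, not zero-extended, to match what the store actually writes. A tiny user-space illustration of the two extension behaviours (it only demonstrates the integer conversion, not the verifier internals):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
            int32_t imm = -1;

            /* old behaviour: zero-extend through u32 -> 4294967295 */
            printf("%lld\n", (long long)(uint32_t)imm);
            /* fixed behaviour: sign-extend -> -1 */
            printf("%lld\n", (long long)imm);
            return 0;
    }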
@@ -5388,7 +5393,9 @@ static bool in_rcu_cs(struct bpf_verifier_env *env)
 /* Once GCC supports btf_type_tag the following mechanism will be replaced with tag check */
 BTF_SET_START(rcu_protected_types)
 BTF_ID(struct, prog_test_ref_kfunc)
+#ifdef CONFIG_CGROUPS
 BTF_ID(struct, cgroup)
+#endif
 BTF_ID(struct, bpf_cpumask)
 BTF_ID(struct, task_struct)
 BTF_SET_END(rcu_protected_types)
@@ -10835,7 +10842,9 @@ BTF_ID(func, bpf_dynptr_clone)
 BTF_ID(func, bpf_percpu_obj_new_impl)
 BTF_ID(func, bpf_percpu_obj_drop_impl)
 BTF_ID(func, bpf_throw)
+#ifdef CONFIG_CGROUPS
 BTF_ID(func, bpf_iter_css_task_new)
+#endif
 BTF_SET_END(special_kfunc_set)
 
 BTF_ID_LIST(special_kfunc_list)
@@ -10861,7 +10870,11 @@ BTF_ID(func, bpf_dynptr_clone)
 BTF_ID(func, bpf_percpu_obj_new_impl)
 BTF_ID(func, bpf_percpu_obj_drop_impl)
 BTF_ID(func, bpf_throw)
+#ifdef CONFIG_CGROUPS
 BTF_ID(func, bpf_iter_css_task_new)
+#else
+BTF_ID_UNUSED
+#endif
 
 static bool is_kfunc_ret_null(struct bpf_kfunc_call_arg_meta *meta)
 {
@@ -11394,6 +11407,12 @@ static int process_kf_arg_ptr_to_rbtree_node(struct bpf_verifier_env *env,
 					  &meta->arg_rbtree_root.field);
 }
 
+/*
+ * css_task iter allowlist is needed to avoid dead locking on css_set_lock.
+ * LSM hooks and iters (both sleepable and non-sleepable) are safe.
+ * Any sleepable progs are also safe since bpf_check_attach_target() enforce
+ * them can only be attached to some specific hook points.
+ */
 static bool check_css_task_iter_allowlist(struct bpf_verifier_env *env)
 {
 	enum bpf_prog_type prog_type = resolve_prog_type(env->prog);
@@ -11401,10 +11420,12 @@ static bool check_css_task_iter_allowlist(struct bpf_verifier_env *env)
 	switch (prog_type) {
 	case BPF_PROG_TYPE_LSM:
 		return true;
-	case BPF_TRACE_ITER:
-		return env->prog->aux->sleepable;
+	case BPF_PROG_TYPE_TRACING:
+		if (env->prog->expected_attach_type == BPF_TRACE_ITER)
+			return true;
+		fallthrough;
 	default:
-		return false;
+		return env->prog->aux->sleepable;
 	}
 }
 
@@ -11663,7 +11684,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 		case KF_ARG_PTR_TO_ITER:
 			if (meta->func_id == special_kfunc_list[KF_bpf_iter_css_task_new]) {
 				if (!check_css_task_iter_allowlist(env)) {
-					verbose(env, "css_task_iter is only allowed in bpf_lsm and bpf iter-s\n");
+					verbose(env, "css_task_iter is only allowed in bpf_lsm, bpf_iter and sleepable progs\n");
 					return -EINVAL;
 				}
 			}

@@ -156,19 +156,16 @@ static struct cgroup *cgroup_rstat_cpu_pop_updated(struct cgroup *pos,
  * optimize away the callsite. Therefore, __weak is needed to ensure that the
  * call is still emitted, by telling the compiler that we don't know what the
  * function might eventually be.
- *
- * __diag_* below are needed to dismiss the missing prototype warning.
  */
-__diag_push();
-__diag_ignore_all("-Wmissing-prototypes",
-		  "kfuncs which will be used in BPF programs");
+__bpf_hook_start();
 
 __weak noinline void bpf_rstat_flush(struct cgroup *cgrp,
 				     struct cgroup *parent, int cpu)
 {
 }
 
-__diag_pop();
+__bpf_hook_end();
 
 /* see cgroup_rstat_flush() */
 static void cgroup_rstat_flush_locked(struct cgroup *cgrp)

@@ -1252,9 +1252,7 @@ static const struct bpf_func_proto bpf_get_func_arg_cnt_proto = {
 };
 
 #ifdef CONFIG_KEYS
-__diag_push();
-__diag_ignore_all("-Wmissing-prototypes",
-		  "kfuncs which will be used in BPF programs");
+__bpf_kfunc_start_defs();
 
 /**
  * bpf_lookup_user_key - lookup a key by its serial
@@ -1404,7 +1402,7 @@ __bpf_kfunc int bpf_verify_pkcs7_signature(struct bpf_dynptr_kern *data_ptr,
 }
 #endif /* CONFIG_SYSTEM_DATA_VERIFICATION */
 
-__diag_pop();
+__bpf_kfunc_end_defs();
 
 BTF_SET8_START(key_sig_kfunc_set)
 BTF_ID_FLAGS(func, bpf_lookup_user_key, KF_ACQUIRE | KF_RET_NULL | KF_SLEEPABLE)

@@ -503,9 +503,8 @@ static int bpf_test_finish(const union bpf_attr *kattr,
  * architecture dependent calling conventions. 7+ can be supported in the
  * future.
  */
-__diag_push();
-__diag_ignore_all("-Wmissing-prototypes",
-		  "Global functions as their definitions will be in vmlinux BTF");
+__bpf_kfunc_start_defs();
 
 __bpf_kfunc int bpf_fentry_test1(int a)
 {
 	return a + 1;
@@ -605,7 +604,7 @@ __bpf_kfunc void bpf_kfunc_call_memb_release(struct prog_test_member *p)
 {
 }
 
-__diag_pop();
+__bpf_kfunc_end_defs();
 
 BTF_SET8_START(bpf_test_modify_return_ids)
 BTF_ID_FLAGS(func, bpf_modify_return_test)

@@ -135,3 +135,4 @@ static void __exit ebtable_broute_fini(void)
 module_init(ebtable_broute_init);
 module_exit(ebtable_broute_fini);
 MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Force packets to be routed instead of bridged");

@@ -116,3 +116,4 @@ static void __exit ebtable_filter_fini(void)
 module_init(ebtable_filter_init);
 module_exit(ebtable_filter_fini);
 MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("ebtables legacy filter table");

@@ -116,3 +116,4 @@ static void __exit ebtable_nat_fini(void)
 module_init(ebtable_nat_init);
 module_exit(ebtable_nat_fini);
 MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("ebtables legacy stateless nat table");

@@ -2595,3 +2595,4 @@ EXPORT_SYMBOL(ebt_do_table);
 module_init(ebtables_init);
 module_exit(ebtables_fini);
 MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("ebtables legacy core");

@@ -416,3 +416,4 @@ module_exit(nf_conntrack_l3proto_bridge_fini);
 MODULE_ALIAS("nf_conntrack-" __stringify(AF_BRIDGE));
 MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Bridge IPv4 and IPv6 connection tracking");

@@ -11767,9 +11767,7 @@ bpf_sk_base_func_proto(enum bpf_func_id func_id)
 	return func;
 }
 
-__diag_push();
-__diag_ignore_all("-Wmissing-prototypes",
-		  "Global functions as their definitions will be in vmlinux BTF");
+__bpf_kfunc_start_defs();
 
 __bpf_kfunc int bpf_dynptr_from_skb(struct sk_buff *skb, u64 flags,
 				    struct bpf_dynptr_kern *ptr__uninit)
 {
@@ -11816,7 +11814,7 @@ __bpf_kfunc int bpf_sock_addr_set_sun_path(struct bpf_sock_addr_kern *sa_kern,
 	return 0;
 }
 
-__diag_pop();
+__bpf_kfunc_end_defs();
 
 int bpf_dynptr_from_skb_rdonly(struct sk_buff *skb, u64 flags,
 			       struct bpf_dynptr_kern *ptr__uninit)
@@ -11879,10 +11877,7 @@ static int __init bpf_kfunc_init(void)
 }
 late_initcall(bpf_kfunc_init);
 
-/* Disables missing prototype warnings */
-__diag_push();
-__diag_ignore_all("-Wmissing-prototypes",
-		  "Global functions as their definitions will be in vmlinux BTF");
+__bpf_kfunc_start_defs();
 
 /* bpf_sock_destroy: Destroy the given socket with ECONNABORTED error code.
  *
@@ -11916,7 +11911,7 @@ __bpf_kfunc int bpf_sock_destroy(struct sock_common *sock)
 	return sk->sk_prot->diag_destroy(sk, ECONNABORTED);
 }
 
-__diag_pop()
+__bpf_kfunc_end_defs();
 
 BTF_SET8_START(bpf_sk_iter_kfunc_ids)
 BTF_ID_FLAGS(func, bpf_sock_destroy, KF_TRUSTED_ARGS)

@@ -217,8 +217,12 @@ static int page_pool_init(struct page_pool *pool,
 		return -ENOMEM;
 #endif
 
-	if (ptr_ring_init(&pool->ring, ring_qsize, GFP_KERNEL) < 0)
+	if (ptr_ring_init(&pool->ring, ring_qsize, GFP_KERNEL) < 0) {
+#ifdef CONFIG_PAGE_POOL_STATS
+		free_percpu(pool->recycle_stats);
+#endif
 		return -ENOMEM;
+	}
 
 	atomic_set(&pool->pages_state_release_cnt, 0);
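
The early return leaked the per-CPU recycle_stats allocated a few lines above whenever the ptr_ring allocation failed; the fix frees it on that error path. A generic user-space sketch of the same unwind pattern, with hypothetical names (struct pool, pool_init) rather than the page_pool API:

    #include <stdlib.h>

    struct pool { void *stats; void *ring; };

    static int pool_init(struct pool *p, size_t ring_bytes)
    {
            p->stats = malloc(64);          /* first resource */
            if (!p->stats)
                    return -1;

            p->ring = malloc(ring_bytes);   /* second resource fails ... */
            if (!p->ring) {
                    free(p->stats);         /* ... so undo the first one */
                    p->stats = NULL;
                    return -1;
            }
            return 0;
    }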

@@ -696,9 +696,7 @@ struct xdp_frame *xdpf_clone(struct xdp_frame *xdpf)
 	return nxdpf;
 }
 
-__diag_push();
-__diag_ignore_all("-Wmissing-prototypes",
-		  "Global functions as their definitions will be in vmlinux BTF");
+__bpf_kfunc_start_defs();
 
 /**
  * bpf_xdp_metadata_rx_timestamp - Read XDP frame RX timestamp.
@@ -738,7 +736,7 @@ __bpf_kfunc int bpf_xdp_metadata_rx_hash(const struct xdp_md *ctx, u32 *hash,
 	return -EOPNOTSUPP;
 }
 
-__diag_pop();
+__bpf_kfunc_end_defs();
 
 BTF_SET8_START(xdp_metadata_kfunc_ids)
 #define XDP_METADATA_KFUNC(_, __, name, ___) BTF_ID_FLAGS(func, name, KF_TRUSTED_ARGS)

@@ -629,9 +629,6 @@ int dccp_v4_conn_request(struct sock *sk, struct sk_buff *skb)
 	if (dccp_parse_options(sk, dreq, skb))
 		goto drop_and_free;
 
-	if (security_inet_conn_request(sk, skb, req))
-		goto drop_and_free;
-
 	ireq = inet_rsk(req);
 	sk_rcv_saddr_set(req_to_sk(req), ip_hdr(skb)->daddr);
 	sk_daddr_set(req_to_sk(req), ip_hdr(skb)->saddr);
@@ -639,6 +636,9 @@ int dccp_v4_conn_request(struct sock *sk, struct sk_buff *skb)
 	ireq->ireq_family = AF_INET;
 	ireq->ir_iif = READ_ONCE(sk->sk_bound_dev_if);
 
+	if (security_inet_conn_request(sk, skb, req))
+		goto drop_and_free;
+
 	/*
 	 * Step 3: Process LISTEN state
 	 *
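
Moving the hook below the address setup means the LSM (for example SELinux or Smack) sees the remote and local addresses already stored in the request socket rather than an unpopulated one; the same reordering is applied to the DCCPv6 and IPv6 syncookie paths in the hunks below.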

@@ -360,15 +360,15 @@ static int dccp_v6_conn_request(struct sock *sk, struct sk_buff *skb)
 	if (dccp_parse_options(sk, dreq, skb))
 		goto drop_and_free;
 
-	if (security_inet_conn_request(sk, skb, req))
-		goto drop_and_free;
-
 	ireq = inet_rsk(req);
 	ireq->ir_v6_rmt_addr = ipv6_hdr(skb)->saddr;
 	ireq->ir_v6_loc_addr = ipv6_hdr(skb)->daddr;
 	ireq->ireq_family = AF_INET6;
 	ireq->ir_mark = inet_request_mark(sk, skb);
 
+	if (security_inet_conn_request(sk, skb, req))
+		goto drop_and_free;
+
 	if (ipv6_opt_accepted(sk, skb, IP6CB(skb)) ||
 	    np->rxopt.bits.rxinfo || np->rxopt.bits.rxoinfo ||
 	    np->rxopt.bits.rxhlim || np->rxopt.bits.rxohlim) {

@@ -15,7 +15,7 @@ const struct nla_policy devlink_dl_port_function_nl_policy[DEVLINK_PORT_FN_ATTR_
 	[DEVLINK_PORT_FUNCTION_ATTR_HW_ADDR] = { .type = NLA_BINARY, },
 	[DEVLINK_PORT_FN_ATTR_STATE] = NLA_POLICY_MAX(NLA_U8, 1),
 	[DEVLINK_PORT_FN_ATTR_OPSTATE] = NLA_POLICY_MAX(NLA_U8, 1),
-	[DEVLINK_PORT_FN_ATTR_CAPS] = NLA_POLICY_BITFIELD32(3),
+	[DEVLINK_PORT_FN_ATTR_CAPS] = NLA_POLICY_BITFIELD32(15),
 };
 
 const struct nla_policy devlink_dl_selftest_id_nl_policy[DEVLINK_ATTR_SELFTEST_ID_FLASH + 1] = {
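
NLA_POLICY_BITFIELD32(valid) rejects any selector or value bits outside 'valid'. 3 is binary 0011 and only admits two capability bits, while 15 is binary 1111 and admits four, presumably matching the number of port-function capability flags defined by this point (the capability enum itself is not part of this hunk).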

@@ -342,9 +342,7 @@ struct sk_buff *prp_create_tagged_frame(struct hsr_frame_info *frame,
 	skb = skb_copy_expand(frame->skb_std, 0,
 			      skb_tailroom(frame->skb_std) + HSR_HLEN,
 			      GFP_ATOMIC);
-	prp_fill_rct(skb, frame, port);
-
-	return skb;
+	return prp_fill_rct(skb, frame, port);
 }
 
 static void hsr_deliver_master(struct sk_buff *skb, struct net_device *dev,

@@ -22,9 +22,7 @@ enum bpf_fou_encap_type {
 	FOU_BPF_ENCAP_GUE,
 };
 
-__diag_push();
-__diag_ignore_all("-Wmissing-prototypes",
-		  "Global functions as their definitions will be in BTF");
+__bpf_kfunc_start_defs();
 
 /* bpf_skb_set_fou_encap - Set FOU encap parameters
 *
@@ -100,7 +98,7 @@ __bpf_kfunc int bpf_skb_get_fou_encap(struct __sk_buff *skb_ctx,
 	return 0;
 }
 
-__diag_pop()
+__bpf_kfunc_end_defs();
 
 BTF_SET8_START(fou_kfunc_set)
 BTF_ID_FLAGS(func, bpf_skb_set_fou_encap)

@@ -170,3 +170,4 @@ module_init(iptable_nat_init);
 module_exit(iptable_nat_exit);
 MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("iptables legacy nat table");

@@ -108,3 +108,4 @@ static void __exit iptable_raw_fini(void)
 module_init(iptable_raw_init);
 module_exit(iptable_raw_fini);
 MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("iptables legacy raw table");

@@ -186,3 +186,4 @@ module_init(nf_defrag_init);
 module_exit(nf_defrag_fini);
 MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("IPv4 defragmentation support");

@@ -336,3 +336,4 @@ void nf_send_unreach(struct sk_buff *skb_in, int code, int hook)
 EXPORT_SYMBOL_GPL(nf_send_unreach);
 
 MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("IPv4 packet rejection core");

@@ -306,7 +306,7 @@ struct request_sock *cookie_tcp_reqsk_alloc(const struct request_sock_ops *ops,
 	treq->af_specific = af_ops;
 	treq->syn_tos = TCP_SKB_CB(skb)->ip_dsfield;
-	treq->req_usec_ts = -1;
+	treq->req_usec_ts = false;
 
 #if IS_ENABLED(CONFIG_MPTCP)
 	treq->is_mptcp = sk_is_mptcp(sk);

@@ -1315,7 +1315,8 @@ static int tcp_ao_parse_crypto(struct tcp_ao_add *cmd, struct tcp_ao_key *key)
 	key->maclen = cmd->maclen ?: 12; /* 12 is the default in RFC5925 */
 
 	/* Check: maclen + tcp-ao header <= (MAX_TCP_OPTION_SPACE - mss
-	 *					- tstamp - wscale - sackperm),
+	 *					- tstamp (including sackperm)
+	 *					- wscale),
 	 * see tcp_syn_options(), tcp_synack_options(), commit 33ad798c924b.
 	 *
 	 * In order to allow D-SACK with TCP-AO, the header size should be:
@@ -1342,9 +1343,9 @@ static int tcp_ao_parse_crypto(struct tcp_ao_add *cmd, struct tcp_ao_key *key)
 	 * large to leave sufficient option space.
 	 */
 	syn_tcp_option_space = MAX_TCP_OPTION_SPACE;
+	syn_tcp_option_space -= TCPOLEN_MSS_ALIGNED;
 	syn_tcp_option_space -= TCPOLEN_TSTAMP_ALIGNED;
 	syn_tcp_option_space -= TCPOLEN_WSCALE_ALIGNED;
-	syn_tcp_option_space -= TCPOLEN_SACKPERM_ALIGNED;
 	if (tcp_ao_len(key) > syn_tcp_option_space) {
 		err = -EMSGSIZE;
 		goto err_kfree;
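
Assuming the usual aligned option lengths (MAX_TCP_OPTION_SPACE = 40, TCPOLEN_MSS_ALIGNED = 4, TCPOLEN_TSTAMP_ALIGNED = 12, TCPOLEN_WSCALE_ALIGNED = 4), the budget left for the AO option in a SYN is 40 - 4 - 12 - 4 = 20 bytes, so tcp_ao_len(key) (the TCP-AO header plus the configured MAC length) has to fit in 20 bytes here, otherwise the key is rejected with -EMSGSIZE.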

@@ -7115,7 +7115,7 @@ int tcp_conn_request(struct request_sock_ops *rsk_ops,
 	req->syncookie = want_cookie;
 	tcp_rsk(req)->af_specific = af_ops;
 	tcp_rsk(req)->ts_off = 0;
-	tcp_rsk(req)->req_usec_ts = -1;
+	tcp_rsk(req)->req_usec_ts = false;
 #if IS_ENABLED(CONFIG_MPTCP)
 	tcp_rsk(req)->is_mptcp = 0;
 #endif
@@ -7143,9 +7143,10 @@ int tcp_conn_request(struct request_sock_ops *rsk_ops,
 	if (!dst)
 		goto drop_and_free;
 
-	if (tmp_opt.tstamp_ok)
+	if (tmp_opt.tstamp_ok) {
+		tcp_rsk(req)->req_usec_ts = dst_tcp_usec_ts(dst);
 		tcp_rsk(req)->ts_off = af_ops->init_ts_off(net, skb);
+	}
 
 	if (!want_cookie && !isn) {
 		int max_syn_backlog = READ_ONCE(net->ipv4.sysctl_max_syn_backlog);
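
req_usec_ts is now a plain boolean decision made at request time: it defaults to false and, when the peer negotiated timestamps, is taken from the route via dst_tcp_usec_ts() right here. That removes the need for the -1 "not yet decided" sentinel and for the lazy initialisation in tcp_make_synack() (dropped in a later hunk below), so paths that do not go through that lazy init, such as TCP Fast Open, see the correct setting as well.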

@@ -601,6 +601,44 @@ static void bpf_skops_write_hdr_opt(struct sock *sk, struct sk_buff *skb,
 }
 #endif
 
+static __be32 *process_tcp_ao_options(struct tcp_sock *tp,
+				      const struct tcp_request_sock *tcprsk,
+				      struct tcp_out_options *opts,
+				      struct tcp_key *key, __be32 *ptr)
+{
+#ifdef CONFIG_TCP_AO
+	u8 maclen = tcp_ao_maclen(key->ao_key);
+
+	if (tcprsk) {
+		u8 aolen = maclen + sizeof(struct tcp_ao_hdr);
+
+		*ptr++ = htonl((TCPOPT_AO << 24) | (aolen << 16) |
+			       (tcprsk->ao_keyid << 8) |
+			       (tcprsk->ao_rcv_next));
+	} else {
+		struct tcp_ao_key *rnext_key;
+		struct tcp_ao_info *ao_info;
+
+		ao_info = rcu_dereference_check(tp->ao_info,
+			lockdep_sock_is_held(&tp->inet_conn.icsk_inet.sk));
+		rnext_key = READ_ONCE(ao_info->rnext_key);
+		if (WARN_ON_ONCE(!rnext_key))
+			return ptr;
+		*ptr++ = htonl((TCPOPT_AO << 24) |
+			       (tcp_ao_len(key->ao_key) << 16) |
+			       (key->ao_key->sndid << 8) |
+			       (rnext_key->rcvid));
+	}
+
+	opts->hash_location = (__u8 *)ptr;
+	ptr += maclen / sizeof(*ptr);
+	if (unlikely(maclen % sizeof(*ptr))) {
+		memset(ptr, TCPOPT_NOP, sizeof(*ptr));
+		ptr++;
+	}
+#endif
+	return ptr;
+}
+
 /* Write previously computed TCP options to the packet.
  *
  * Beware: Something in the Internet is very sensitive to the ordering of
@@ -629,37 +667,7 @@ static void tcp_options_write(struct tcphdr *th, struct tcp_sock *tp,
 		opts->hash_location = (__u8 *)ptr;
 		ptr += 4;
 	} else if (tcp_key_is_ao(key)) {
-#ifdef CONFIG_TCP_AO
-		u8 maclen = tcp_ao_maclen(key->ao_key);
-
-		if (tcprsk) {
-			u8 aolen = maclen + sizeof(struct tcp_ao_hdr);
-
-			*ptr++ = htonl((TCPOPT_AO << 24) | (aolen << 16) |
-				       (tcprsk->ao_keyid << 8) |
-				       (tcprsk->ao_rcv_next));
-		} else {
-			struct tcp_ao_key *rnext_key;
-			struct tcp_ao_info *ao_info;
-
-			ao_info = rcu_dereference_check(tp->ao_info,
-				lockdep_sock_is_held(&tp->inet_conn.icsk_inet.sk));
-			rnext_key = READ_ONCE(ao_info->rnext_key);
-			if (WARN_ON_ONCE(!rnext_key))
-				goto out_ao;
-			*ptr++ = htonl((TCPOPT_AO << 24) |
-				       (tcp_ao_len(key->ao_key) << 16) |
-				       (key->ao_key->sndid << 8) |
-				       (rnext_key->rcvid));
-		}
-		opts->hash_location = (__u8 *)ptr;
-		ptr += maclen / sizeof(*ptr);
-		if (unlikely(maclen % sizeof(*ptr))) {
-			memset(ptr, TCPOPT_NOP, sizeof(*ptr));
-			ptr++;
-		}
-out_ao:
-#endif
+		ptr = process_tcp_ao_options(tp, tcprsk, opts, key, ptr);
 	}
 	if (unlikely(opts->mss)) {
 		*ptr++ = htonl((TCPOPT_MSS << 24) |
@@ -3693,8 +3701,6 @@ struct sk_buff *tcp_make_synack(const struct sock *sk, struct dst_entry *dst,
 	mss = tcp_mss_clamp(tp, dst_metric_advmss(dst));
 
 	memset(&opts, 0, sizeof(opts));
-	if (tcp_rsk(req)->req_usec_ts < 0)
-		tcp_rsk(req)->req_usec_ts = dst_tcp_usec_ts(dst);
 	now = tcp_clock_ns();
 
 #ifdef CONFIG_SYN_COOKIES
 	if (unlikely(synack_type == TCP_SYNACK_COOKIE && ireq->tstamp_ok))
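
The helper writes the same bytes the inline block used to: the first htonl() packs the 4-byte TCP-AO header (kind TCPOPT_AO = 29, length, key id, rnext key id), then the MAC area is reserved and NOP-padded to a 32-bit boundary when the MAC length is not a multiple of four. For the default 12-byte MAC in the request-socket branch that means a length byte of 12 + 4 = 16 and exactly three 32-bit words of MAC, so no padding word is needed. Returning ptr also replaces the old goto out_ao label that had to live inside the #ifdef.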

@@ -231,7 +231,7 @@ static void cpool_schedule_cleanup(struct kref *kref)
  */
 void tcp_sigpool_release(unsigned int id)
 {
-	if (WARN_ON_ONCE(id > cpool_populated || !cpool[id].alg))
+	if (WARN_ON_ONCE(id >= cpool_populated || !cpool[id].alg))
 		return;
 
 	/* slow-path */
@@ -245,7 +245,7 @@ EXPORT_SYMBOL_GPL(tcp_sigpool_release);
 */
 void tcp_sigpool_get(unsigned int id)
 {
-	if (WARN_ON_ONCE(id > cpool_populated || !cpool[id].alg))
+	if (WARN_ON_ONCE(id >= cpool_populated || !cpool[id].alg))
 		return;
 	kref_get(&cpool[id].kref);
 }
@@ -256,7 +256,7 @@ int tcp_sigpool_start(unsigned int id, struct tcp_sigpool *c) __cond_acquires(RC
 	struct crypto_ahash *hash;
 
 	rcu_read_lock_bh();
-	if (WARN_ON_ONCE(id > cpool_populated || !cpool[id].alg)) {
+	if (WARN_ON_ONCE(id >= cpool_populated || !cpool[id].alg)) {
 		rcu_read_unlock_bh();
 		return -EINVAL;
 	}
@@ -301,7 +301,7 @@ EXPORT_SYMBOL_GPL(tcp_sigpool_end);
 */
 size_t tcp_sigpool_algo(unsigned int id, char *buf, size_t buf_len)
 {
-	if (WARN_ON_ONCE(id > cpool_populated || !cpool[id].alg))
+	if (WARN_ON_ONCE(id >= cpool_populated || !cpool[id].alg))
 		return -EINVAL;
 
 	return strscpy(buf, cpool[id].alg, buf_len);
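
cpool_populated counts how many slots have been handed out, so the valid ids are 0 .. cpool_populated - 1; the old "id > cpool_populated" check let the one-past-the-end index through. A minimal user-space sketch of the corrected bound (algs and populated are made-up stand-ins for the pool):

    #include <stdio.h>

    #define N 4

    static const char *algs[N] = { "md5", "sha1", NULL, NULL };
    static int populated = 2;                /* valid ids are 0 and 1 */

    static int id_valid(unsigned int id)
    {
            /* mirrors the fixed check: reject id >= populated */
            return id < (unsigned int)populated && algs[id];
    }

    int main(void)
    {
            printf("%d %d %d\n", id_valid(1), id_valid(2), id_valid(4)); /* 1 0 0 */
            return 0;
    }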

@@ -170,3 +170,4 @@ module_init(ip6table_nat_init);
 module_exit(ip6table_nat_exit);
 MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Ip6tables legacy nat table");

@@ -106,3 +106,4 @@ static void __exit ip6table_raw_fini(void)
 module_init(ip6table_raw_init);
 module_exit(ip6table_raw_fini);
 MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Ip6tables legacy raw table");

@@ -182,3 +182,4 @@ module_init(nf_defrag_init);
 module_exit(nf_defrag_fini);
 MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("IPv6 defragmentation support");

@@ -413,3 +413,4 @@ void nf_send_unreach6(struct net *net, struct sk_buff *skb_in,
 EXPORT_SYMBOL_GPL(nf_send_unreach6);
 
 MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("IPv6 packet rejection core");

@@ -181,14 +181,15 @@ struct sock *cookie_v6_check(struct sock *sk, struct sk_buff *skb)
 	treq = tcp_rsk(req);
 	treq->tfo_listener = false;
 
-	if (security_inet_conn_request(sk, skb, req))
-		goto out_free;
-
 	req->mss = mss;
 	ireq->ir_rmt_port = th->source;
 	ireq->ir_num = ntohs(th->dest);
 	ireq->ir_v6_rmt_addr = ipv6_hdr(skb)->saddr;
 	ireq->ir_v6_loc_addr = ipv6_hdr(skb)->daddr;
 
+	if (security_inet_conn_request(sk, skb, req))
+		goto out_free;
+
 	if (ipv6_opt_accepted(sk, skb, &TCP_SKB_CB(skb)->header.h6) ||
 	    np->rxopt.bits.rxinfo || np->rxopt.bits.rxoinfo ||
 	    np->rxopt.bits.rxhlim || np->rxopt.bits.rxohlim) {

@@ -1946,4 +1946,5 @@ module_init(kcm_init);
 module_exit(kcm_exit);
 
 MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("KCM (Kernel Connection Multiplexor) sockets");
 MODULE_ALIAS_NETPROTO(PF_KCM);

@@ -127,8 +127,14 @@ static inline int llc_fixup_skb(struct sk_buff *skb)
 	skb->transport_header += llc_len;
 	skb_pull(skb, llc_len);
 	if (skb->protocol == htons(ETH_P_802_2)) {
-		__be16 pdulen = eth_hdr(skb)->h_proto;
-		s32 data_size = ntohs(pdulen) - llc_len;
+		__be16 pdulen;
+		s32 data_size;
+
+		if (skb->mac_len < ETH_HLEN)
+			return 0;
+
+		pdulen = eth_hdr(skb)->h_proto;
+		data_size = ntohs(pdulen) - llc_len;
 
 		if (data_size < 0 ||
 		    !pskb_may_pull(skb, data_size))
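
For 802.2/802.3 frames the PDU length is read out of the Ethernet header, so a packet that reaches LLC without a full MAC header (skb->mac_len < ETH_HLEN, which can happen for packets injected through a packet socket) would have eth_hdr(skb)->h_proto read beyond the header that is actually present. The same guard is added to the two LLC test responders in the following hunks.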

@@ -153,6 +153,9 @@ int llc_sap_action_send_test_r(struct llc_sap *sap, struct sk_buff *skb)
 	int rc = 1;
 	u32 data_size;
 
+	if (skb->mac_len < ETH_HLEN)
+		return 1;
+
 	llc_pdu_decode_sa(skb, mac_da);
 	llc_pdu_decode_da(skb, mac_sa);
 	llc_pdu_decode_ssap(skb, &dsap);

@@ -76,6 +76,9 @@ static int llc_station_ac_send_test_r(struct sk_buff *skb)
 	u32 data_size;
 	struct sk_buff *nskb;
 
+	if (skb->mac_len < ETH_HLEN)
+		goto out;
+
 	/* The test request command is type U (llc_len = 3) */
 	data_size = ntohs(eth_hdr(skb)->h_proto) - 3;
 	nskb = llc_alloc_frame(NULL, skb->dev, LLC_PDU_TYPE_U, data_size);

@@ -2450,3 +2450,4 @@ static void __exit ip_vs_cleanup(void)
 module_init(ip_vs_init);
 module_exit(ip_vs_cleanup);
 MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("IP Virtual Server");

@@ -270,3 +270,4 @@ static void __exit ip_vs_dh_cleanup(void)
 module_init(ip_vs_dh_init);
 module_exit(ip_vs_dh_cleanup);
 MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("ipvs destination hashing scheduler");

@@ -72,3 +72,4 @@ static void __exit ip_vs_fo_cleanup(void)
 module_init(ip_vs_fo_init);
 module_exit(ip_vs_fo_cleanup);
 MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("ipvs weighted failover scheduler");

@@ -635,3 +635,4 @@ static void __exit ip_vs_ftp_exit(void)
 module_init(ip_vs_ftp_init);
 module_exit(ip_vs_ftp_exit);
 MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("ipvs ftp helper");

@@ -632,3 +632,4 @@ static void __exit ip_vs_lblc_cleanup(void)
 module_init(ip_vs_lblc_init);
 module_exit(ip_vs_lblc_cleanup);
 MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("ipvs locality-based least-connection scheduler");

@@ -817,3 +817,4 @@ static void __exit ip_vs_lblcr_cleanup(void)
 module_init(ip_vs_lblcr_init);
 module_exit(ip_vs_lblcr_cleanup);
 MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("ipvs locality-based least-connection with replication scheduler");

@@ -86,3 +86,4 @@ static void __exit ip_vs_lc_cleanup(void)
 module_init(ip_vs_lc_init);
 module_exit(ip_vs_lc_cleanup);
 MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("ipvs least connection scheduler");

@@ -136,3 +136,4 @@ static void __exit ip_vs_nq_cleanup(void)
 module_init(ip_vs_nq_init);
 module_exit(ip_vs_nq_cleanup);
 MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("ipvs never queue scheduler");

@@ -79,3 +79,4 @@ static void __exit ip_vs_ovf_cleanup(void)
 module_init(ip_vs_ovf_init);
 module_exit(ip_vs_ovf_cleanup);
 MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("ipvs overflow connection scheduler");

@@ -185,3 +185,4 @@ static void __exit ip_vs_sip_cleanup(void)
 module_init(ip_vs_sip_init);
 module_exit(ip_vs_sip_cleanup);
 MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("ipvs sip helper");

@@ -122,4 +122,5 @@ static void __exit ip_vs_rr_cleanup(void)
 module_init(ip_vs_rr_init);
 module_exit(ip_vs_rr_cleanup);
+MODULE_DESCRIPTION("ipvs round-robin scheduler");
 MODULE_LICENSE("GPL");

@@ -137,3 +137,4 @@ static void __exit ip_vs_sed_cleanup(void)
 module_init(ip_vs_sed_init);
 module_exit(ip_vs_sed_cleanup);
 MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("ipvs shortest expected delay scheduler");

@@ -376,3 +376,4 @@ static void __exit ip_vs_sh_cleanup(void)
 module_init(ip_vs_sh_init);
 module_exit(ip_vs_sh_cleanup);
 MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("ipvs source hashing scheduler");

@@ -137,3 +137,4 @@ static void __exit ip_vs_twos_cleanup(void)
 module_init(ip_vs_twos_init);
 module_exit(ip_vs_twos_cleanup);
 MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("ipvs power of twos choice scheduler");

@@ -109,3 +109,4 @@ static void __exit ip_vs_wlc_cleanup(void)
 module_init(ip_vs_wlc_init);
 module_exit(ip_vs_wlc_cleanup);
 MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("ipvs weighted least connection scheduler");

Some files were not shown because too many files have changed in this diff