Merge 5.10.192 into android12-5.10-lts

Changes in 5.10.192
	mmc: sdhci-f-sdh30: Replace with sdhci_pltfm
	macsec: Fix traffic counters/statistics
	macsec: use DEV_STATS_INC()
	net/mlx5: Refactor init clock function
	net/mlx5: Move all internal timer metadata into a dedicated struct
	net/mlx5: Skip clock update work when device is in error state
	drm/radeon: Fix integer overflow in radeon_cs_parser_init
	ALSA: emu10k1: roll up loops in DSP setup code for Audigy
	ASoC: Intel: sof_sdw: add quirk for MTL RVP
	ASoC: Intel: sof_sdw: add quirk for LNL RVP
	PCI: tegra194: Fix possible array out of bounds access
	ARM: dts: imx6dl: prtrvt, prtvt7, prti6q, prtwd2: fix USB related warnings
	ASoC: Intel: sof_sdw: Add support for Rex soundwire
	iopoll: Call cpu_relax() in busy loops
	quota: Properly disable quotas when add_dquot_ref() fails
	quota: fix warning in dqgrab()
	dma-remap: use kvmalloc_array/kvfree for larger dma memory remap
	drm/amdgpu: install stub fence into potential unused fence pointers
	HID: add quirk for 03f0:464a HP Elite Presenter Mouse
	RDMA/mlx5: Return the firmware result upon destroying QP/RQ
	ovl: check type and offset of struct vfsmount in ovl_entry
	udf: Fix uninitialized array access for some pathnames
	fs: jfs: Fix UBSAN: array-index-out-of-bounds in dbAllocDmapLev
	MIPS: dec: prom: Address -Warray-bounds warning
	FS: JFS: Fix null-ptr-deref Read in txBegin
	FS: JFS: Check for read-only mounted filesystem in txBegin
	media: v4l2-mem2mem: add lock to protect parameter num_rdy
	usb: gadget: u_serial: Avoid spinlock recursion in __gs_console_push
	media: platform: mediatek: vpu: fix NULL ptr dereference
	usb: chipidea: imx: don't request QoS for imx8ulp
	usb: chipidea: imx: add missing USB PHY DPDM wakeup setting
	gfs2: Fix possible data races in gfs2_show_options()
	pcmcia: rsrc_nonstatic: Fix memory leak in nonstatic_release_resource_db()
	Bluetooth: L2CAP: Fix use-after-free
	Bluetooth: btusb: Add MT7922 bluetooth ID for the Asus Ally
	drm/amdgpu: Fix potential fence use-after-free v2
	ALSA: hda/realtek: Add quirks for Unis H3C Desktop B760 & Q760
	ALSA: hda: fix a possible null-pointer dereference due to data race in snd_hdac_regmap_sync()
	powerpc/kasan: Disable KCOV in KASAN code
	ring-buffer: Do not swap cpu_buffer during resize process
	IMA: allow/fix UML builds
	iio: add addac subdirectory
	dt-bindings: iio: add AD74413R
	iio: adc: stx104: Utilize iomap interface
	iio: adc: stx104: Implement and utilize register structures
	iio: addac: stx104: Fix race condition for stx104_write_raw()
	iio: addac: stx104: Fix race condition when converting analog-to-digital
	bus: mhi: Add MHI PCI support for WWAN modems
	bus: mhi: Add MMIO region length to controller structure
	bus: mhi: Move host MHI code to "host" directory
	bus: mhi: host: Range check CHDBOFF and ERDBOFF
	irqchip/mips-gic: Get rid of the reliance on irq_cpu_online()
	irqchip/mips-gic: Use raw spinlock for gic_lock
	usb: gadget: udc: core: Introduce check_config to verify USB configuration
	usb: cdns3: allocate TX FIFO size according to composite EP number
	usb: cdns3: fix NCM gadget RX speed 20x slow than expection at iMX8QM
	USB: dwc3: qcom: fix NULL-deref on suspend
	mmc: bcm2835: fix deferred probing
	mmc: sunxi: fix deferred probing
	mmc: core: add devm_mmc_alloc_host
	mmc: meson-gx: use devm_mmc_alloc_host
	mmc: meson-gx: fix deferred probing
	tracing/probes: Have process_fetch_insn() take a void * instead of pt_regs
	tracing/probes: Fix to update dynamic data counter if fetcharg uses it
	virtio-mmio: Use to_virtio_mmio_device() to simply code
	virtio-mmio: don't break lifecycle of vm_dev
	i2c: bcm-iproc: Fix bcm_iproc_i2c_isr deadlock issue
	fbdev: mmp: fix value check in mmphw_probe()
	powerpc/rtas_flash: allow user copy to flash block cache objects
	tty: n_gsm: fix the UAF caused by race condition in gsm_cleanup_mux
	tty: serial: fsl_lpuart: Clear the error flags by writing 1 for lpuart32 platforms
	btrfs: fix BUG_ON condition in btrfs_cancel_balance
	i2c: designware: Handle invalid SMBus block data response length value
	net: xfrm: Fix xfrm_address_filter OOB read
	net: af_key: fix sadb_x_filter validation
	net: xfrm: Amend XFRMA_SEC_CTX nla_policy structure
	xfrm: fix slab-use-after-free in decode_session6
	ip6_vti: fix slab-use-after-free in decode_session6
	ip_vti: fix potential slab-use-after-free in decode_session6
	xfrm: add NULL check in xfrm_update_ae_params
	xfrm: add forgotten nla_policy for XFRMA_MTIMER_THRESH
	selftests: mirror_gre_changes: Tighten up the TTL test match
	drm/panel: simple: Fix AUO G121EAN01 panel timings according to the docs
	ipvs: fix racy memcpy in proc_do_sync_threshold
	netfilter: nft_dynset: disallow object maps
	net: phy: broadcom: stub c45 read/write for 54810
	team: Fix incorrect deletion of ETH_P_8021AD protocol vid from slaves
	i40e: fix misleading debug logs
	net: dsa: mv88e6xxx: Wait for EEPROM done before HW reset
	sock: Fix misuse of sk_under_memory_pressure()
	net: do not allow gso_size to be set to GSO_BY_FRAGS
	bus: ti-sysc: Flush posted write on enable before reset
	arm64: dts: rockchip: fix supplies on rk3399-rock-pi-4
	arm64: dts: rockchip: use USB host by default on rk3399-rock-pi-4
	arm64: dts: rockchip: add ES8316 codec for ROCK Pi 4
	arm64: dts: rockchip: add SPDIF node for ROCK Pi 4
	arm64: dts: rockchip: fix regulator name on rk3399-rock-4
	arm64: dts: rockchip: sort nodes/properties on rk3399-rock-4
	arm64: dts: rockchip: Disable HS400 for eMMC on ROCK Pi 4
	ASoC: rt5665: add missed regulator_bulk_disable
	ASoC: meson: axg-tdm-formatter: fix channel slot allocation
	ALSA: hda/realtek - Remodified 3k pull low procedure
	serial: 8250: Fix oops for port->pm on uart_change_pm()
	ALSA: usb-audio: Add support for Mythware XA001AU capture and playback interfaces.
	cifs: Release folio lock on fscache read hit.
	mmc: wbsd: fix double mmc_free_host() in wbsd_init()
	mmc: block: Fix in_flight[issue_type] value error
	netfilter: set default timeout to 3 secs for sctp shutdown send and recv state
	af_unix: Fix null-ptr-deref in unix_stream_sendpage().
	virtio-net: set queues after driver_ok
	net: fix the RTO timer retransmitting skb every 1ms if linear option is enabled
	mmc: f-sdh30: fix order of function calls in sdhci_f_sdh30_remove
	x86/cpu: Fix __x86_return_thunk symbol type
	x86/cpu: Fix up srso_safe_ret() and __x86_return_thunk()
	x86/alternative: Make custom return thunk unconditional
	objtool: Add frame-pointer-specific function ignore
	x86/ibt: Add ANNOTATE_NOENDBR
	x86/cpu: Clean up SRSO return thunk mess
	x86/cpu: Rename original retbleed methods
	x86/cpu: Rename srso_(.*)_alias to srso_alias_\1
	x86/cpu: Cleanup the untrain mess
	x86/srso: Explain the untraining sequences a bit more
	x86/static_call: Fix __static_call_fixup()
	x86/retpoline: Don't clobber RFLAGS during srso_safe_ret()
	x86/CPU/AMD: Fix the DIV(0) initial fix attempt
	x86/srso: Disable the mitigation on unaffected configurations
	x86/retpoline,kprobes: Fix position of thunk sections with CONFIG_LTO_CLANG
	objtool/x86: Fixup frame-pointer vs rethunk
	x86/srso: Correct the mitigation status when SMT is disabled
	Linux 5.10.192

Change-Id: Id6dcc6748bce39baa640b8f0c3764d1d95643016
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
@@ -124,8 +124,8 @@ sequence.
 To ensure the safety of this mitigation, the kernel must ensure that the
 safe return sequence is itself free from attacker interference. In Zen3
 and Zen4, this is accomplished by creating a BTB alias between the
-untraining function srso_untrain_ret_alias() and the safe return
-function srso_safe_ret_alias() which results in evicting a potentially
+untraining function srso_alias_untrain_ret() and the safe return
+function srso_alias_safe_ret() which results in evicting a potentially
 poisoned BTB entry and using that safe one for all function returns.
 
 In older Zen1 and Zen2, this is accomplished using a reinterpretation
@@ -0,0 +1,158 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/iio/addac/adi,ad74413r.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Analog Devices AD74412R/AD74413R device
+
+maintainers:
+  - Cosmin Tanislav <cosmin.tanislav@analog.com>
+
+description: |
+  The AD74412R and AD74413R are quad-channel software configurable input/output
+  solutions for building and process control applications. They contain
+  functionality for analog output, analog input, digital input, resistance
+  temperature detector, and thermocouple measurements integrated
+  into a single chip solution with an SPI interface.
+  The devices feature a 16-bit ADC and four configurable 13-bit DACs to provide
+  four configurable input/output channels and a suite of diagnostic functions.
+  The AD74413R differentiates itself from the AD74412R by being HART-compatible.
+  https://www.analog.com/en/products/ad74412r.html
+  https://www.analog.com/en/products/ad74413r.html
+
+properties:
+  compatible:
+    enum:
+      - adi,ad74412r
+      - adi,ad74413r
+
+  reg:
+    maxItems: 1
+
+  '#address-cells':
+    const: 1
+
+  '#size-cells':
+    const: 0
+
+  spi-max-frequency:
+    maximum: 1000000
+
+  spi-cpol: true
+
+  interrupts:
+    maxItems: 1
+
+  refin-supply: true
+
+  shunt-resistor-micro-ohms:
+    description:
+      Shunt (sense) resistor value in micro-Ohms.
+    default: 100000000
+
+required:
+  - compatible
+  - reg
+  - spi-max-frequency
+  - spi-cpol
+  - refin-supply
+
+additionalProperties: false
+
+patternProperties:
+  "^channel@[0-3]$":
+    type: object
+    description: Represents the external channels which are connected to the device.
+
+    properties:
+      reg:
+        description: |
+          The channel number. It can have up to 4 channels numbered from 0 to 3.
+        minimum: 0
+        maximum: 3
+
+      adi,ch-func:
+        $ref: /schemas/types.yaml#/definitions/uint32
+        description: |
+          Channel function.
+          HART functions are not supported on AD74412R.
+          0 - CH_FUNC_HIGH_IMPEDANCE
+          1 - CH_FUNC_VOLTAGE_OUTPUT
+          2 - CH_FUNC_CURRENT_OUTPUT
+          3 - CH_FUNC_VOLTAGE_INPUT
+          4 - CH_FUNC_CURRENT_INPUT_EXT_POWER
+          5 - CH_FUNC_CURRENT_INPUT_LOOP_POWER
+          6 - CH_FUNC_RESISTANCE_INPUT
+          7 - CH_FUNC_DIGITAL_INPUT_LOGIC
+          8 - CH_FUNC_DIGITAL_INPUT_LOOP_POWER
+          9 - CH_FUNC_CURRENT_INPUT_EXT_POWER_HART
+          10 - CH_FUNC_CURRENT_INPUT_LOOP_POWER_HART
+        minimum: 0
+        maximum: 10
+        default: 0
+
+      adi,gpo-comparator:
+        type: boolean
+        description: |
+          Whether to configure GPO as a comparator or not.
+          When not configured as a comparator, the GPO will be treated as an
+          output-only GPIO.
+
+    required:
+      - reg
+
+examples:
+  - |
+    #include <dt-bindings/gpio/gpio.h>
+    #include <dt-bindings/interrupt-controller/irq.h>
+    #include <dt-bindings/iio/addac/adi,ad74413r.h>
+
+    spi {
+      #address-cells = <1>;
+      #size-cells = <0>;
+
+      cs-gpios = <&gpio 17 GPIO_ACTIVE_LOW>;
+      status = "okay";
+
+      ad74413r@0 {
+        compatible = "adi,ad74413r";
+        reg = <0>;
+        spi-max-frequency = <1000000>;
+        spi-cpol;
+
+        #address-cells = <1>;
+        #size-cells = <0>;
+
+        interrupt-parent = <&gpio>;
+        interrupts = <26 IRQ_TYPE_EDGE_FALLING>;
+
+        refin-supply = <&ad74413r_refin>;
+
+        channel@0 {
+          reg = <0>;
+
+          adi,ch-func = <CH_FUNC_VOLTAGE_OUTPUT>;
+        };
+
+        channel@1 {
+          reg = <1>;
+
+          adi,ch-func = <CH_FUNC_CURRENT_OUTPUT>;
+        };
+
+        channel@2 {
+          reg = <2>;
+
+          adi,ch-func = <CH_FUNC_DIGITAL_INPUT_LOGIC>;
+          adi,gpo-comparator;
+        };
+
+        channel@3 {
+          reg = <3>;
+
+          adi,ch-func = <CH_FUNC_CURRENT_INPUT_EXT_POWER>;
+        };
+      };
+    };
+...
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 5
 PATCHLEVEL = 10
-SUBLEVEL = 191
+SUBLEVEL = 192
 EXTRAVERSION =
 NAME = Dare mighty things
@@ -126,6 +126,10 @@
 	status = "disabled";
 };
 
+&usbotg {
+	disable-over-current;
+};
+
 &vpu {
 	status = "disabled";
 };
@@ -69,6 +69,7 @@
 	vbus-supply = <&reg_usb_h1_vbus>;
 	phy_type = "utmi";
 	dr_mode = "host";
+	disable-over-current;
 	status = "okay";
 };
 
@@ -78,10 +79,18 @@
 	pinctrl-0 = <&pinctrl_usbotg>;
 	phy_type = "utmi";
 	dr_mode = "host";
-	disable-over-current;
+	over-current-active-low;
 	status = "okay";
 };
 
+&usbphynop1 {
+	status = "disabled";
+};
+
+&usbphynop2 {
+	status = "disabled";
+};
+
 &usdhc1 {
 	pinctrl-names = "default";
 	pinctrl-0 = <&pinctrl_usdhc1>;
@@ -31,6 +31,40 @@
 		reset-gpios = <&gpio0 RK_PB2 GPIO_ACTIVE_LOW>;
 	};
 
+	sound {
+		compatible = "audio-graph-card";
+		label = "Analog";
+		dais = <&i2s0_p0>;
+	};
+
+	sound-dit {
+		compatible = "audio-graph-card";
+		label = "SPDIF";
+		dais = <&spdif_p0>;
+	};
+
+	spdif-dit {
+		compatible = "linux,spdif-dit";
+		#sound-dai-cells = <0>;
+
+		port {
+			dit_p0_0: endpoint {
+				remote-endpoint = <&spdif_p0_0>;
+			};
+		};
+	};
+
+	vbus_typec: vbus-typec-regulator {
+		compatible = "regulator-fixed";
+		enable-active-high;
+		gpio = <&gpio1 RK_PA3 GPIO_ACTIVE_HIGH>;
+		pinctrl-names = "default";
+		pinctrl-0 = <&vcc5v0_typec_en>;
+		regulator-name = "vbus_typec";
+		regulator-always-on;
+		vin-supply = <&vcc5v0_sys>;
+	};
+
 	vcc12v_dcin: dc-12v {
 		compatible = "regulator-fixed";
 		regulator-name = "vcc12v_dcin";
@@ -40,23 +74,13 @@
 		regulator-max-microvolt = <12000000>;
 	};
 
-	vcc5v0_sys: vcc-sys {
+	vcc3v3_lan: vcc3v3-lan-regulator {
 		compatible = "regulator-fixed";
-		regulator-name = "vcc5v0_sys";
+		regulator-name = "vcc3v3_lan";
 		regulator-always-on;
 		regulator-boot-on;
-		regulator-min-microvolt = <5000000>;
-		regulator-max-microvolt = <5000000>;
-		vin-supply = <&vcc12v_dcin>;
-	};
-
-	vcc_0v9: vcc-0v9 {
-		compatible = "regulator-fixed";
-		regulator-name = "vcc_0v9";
-		regulator-always-on;
-		regulator-boot-on;
-		regulator-min-microvolt = <900000>;
-		regulator-max-microvolt = <900000>;
+		regulator-min-microvolt = <3300000>;
+		regulator-max-microvolt = <3300000>;
 		vin-supply = <&vcc3v3_sys>;
 	};
 
@@ -93,28 +117,24 @@
 		vin-supply = <&vcc5v0_sys>;
 	};
 
-	vcc5v0_typec: vcc5v0-typec-regulator {
+	vcc5v0_sys: vcc-sys {
 		compatible = "regulator-fixed";
-		enable-active-high;
-		gpio = <&gpio1 RK_PA3 GPIO_ACTIVE_HIGH>;
-		pinctrl-names = "default";
-		pinctrl-0 = <&vcc5v0_typec_en>;
-		regulator-name = "vcc5v0_typec";
-		regulator-always-on;
-		vin-supply = <&vcc5v0_sys>;
-	};
-
-	vcc_lan: vcc3v3-phy-regulator {
-		compatible = "regulator-fixed";
-		regulator-name = "vcc_lan";
+		regulator-name = "vcc5v0_sys";
 		regulator-always-on;
 		regulator-boot-on;
-		regulator-min-microvolt = <3300000>;
-		regulator-max-microvolt = <3300000>;
+		regulator-min-microvolt = <5000000>;
+		regulator-max-microvolt = <5000000>;
+		vin-supply = <&vcc12v_dcin>;
+	};
 
-		regulator-state-mem {
-			regulator-off-in-suspend;
-		};
+	vcc_0v9: vcc-0v9 {
+		compatible = "regulator-fixed";
+		regulator-name = "vcc_0v9";
+		regulator-always-on;
+		regulator-boot-on;
+		regulator-min-microvolt = <900000>;
+		regulator-max-microvolt = <900000>;
+		vin-supply = <&vcc3v3_sys>;
 	};
 
 	vdd_log: vdd-log {
@@ -161,7 +181,7 @@
 	assigned-clocks = <&cru SCLK_RMII_SRC>;
 	assigned-clock-parents = <&clkin_gmac>;
 	clock_in_out = "input";
-	phy-supply = <&vcc_lan>;
+	phy-supply = <&vcc3v3_lan>;
 	phy-mode = "rgmii";
 	pinctrl-names = "default";
 	pinctrl-0 = <&rgmii_pins>;
@@ -266,8 +286,8 @@
 				};
 			};
 
-			vcc1v8_codec: LDO_REG1 {
-				regulator-name = "vcc1v8_codec";
+			vcca1v8_codec: LDO_REG1 {
+				regulator-name = "vcca1v8_codec";
 				regulator-always-on;
 				regulator-boot-on;
 				regulator-min-microvolt = <1800000>;
@@ -277,8 +297,8 @@
 				};
 			};
 
-			vcc1v8_hdmi: LDO_REG2 {
-				regulator-name = "vcc1v8_hdmi";
+			vcca1v8_hdmi: LDO_REG2 {
+				regulator-name = "vcca1v8_hdmi";
 				regulator-always-on;
 				regulator-boot-on;
 				regulator-min-microvolt = <1800000>;
@@ -335,8 +355,8 @@
 				};
 			};
 
-			vcc0v9_hdmi: LDO_REG7 {
-				regulator-name = "vcc0v9_hdmi";
+			vcca0v9_hdmi: LDO_REG7 {
+				regulator-name = "vcca0v9_hdmi";
 				regulator-always-on;
 				regulator-boot-on;
 				regulator-min-microvolt = <900000>;
@@ -362,8 +382,6 @@
 				regulator-name = "vcc_cam";
 				regulator-always-on;
 				regulator-boot-on;
-				regulator-min-microvolt = <3300000>;
-				regulator-max-microvolt = <3300000>;
 				regulator-state-mem {
 					regulator-off-in-suspend;
 				};
@@ -373,8 +391,6 @@
 				regulator-name = "vcc_mipi";
 				regulator-always-on;
 				regulator-boot-on;
-				regulator-min-microvolt = <3300000>;
-				regulator-max-microvolt = <3300000>;
 				regulator-state-mem {
 					regulator-off-in-suspend;
 				};
@@ -425,6 +441,20 @@
 	i2c-scl-rising-time-ns = <300>;
 	i2c-scl-falling-time-ns = <15>;
 	status = "okay";
+
+	es8316: codec@11 {
+		compatible = "everest,es8316";
+		reg = <0x11>;
+		clocks = <&cru SCLK_I2S_8CH_OUT>;
+		clock-names = "mclk";
+		#sound-dai-cells = <0>;
+
+		port {
+			es8316_p0_0: endpoint {
+				remote-endpoint = <&i2s0_p0_0>;
+			};
+		};
+	};
 };
 
 &i2c3 {
@@ -443,6 +473,14 @@
 	rockchip,playback-channels = <8>;
 	rockchip,capture-channels = <8>;
 	status = "okay";
+
+	i2s0_p0: port {
+		i2s0_p0_0: endpoint {
+			dai-format = "i2s";
+			mclk-fs = <256>;
+			remote-endpoint = <&es8316_p0_0>;
+		};
+	};
 };
 
 &i2s1 {
@@ -455,21 +493,10 @@
 };
 
 &io_domains {
-	status = "okay";
-
+	audio-supply = <&vcca1v8_codec>;
 	bt656-supply = <&vcc_3v0>;
-	audio-supply = <&vcc1v8_codec>;
-	sdmmc-supply = <&vcc_sdio>;
 	gpio1830-supply = <&vcc_3v0>;
-};
-
-&pmu_io_domains {
-	status = "okay";
-
-	pmu1830-supply = <&vcc_3v0>;
-};
-
-&pcie_phy {
+	sdmmc-supply = <&vcc_sdio>;
 	status = "okay";
 };
 
@@ -485,6 +512,10 @@
 	status = "okay";
 };
 
+&pcie_phy {
+	status = "okay";
+};
+
 &pinctrl {
 	bt {
 		bt_enable_h: bt-enable-h {
@@ -506,6 +537,20 @@
 		};
 	};
 
+	pmic {
+		pmic_int_l: pmic-int-l {
+			rockchip,pins = <1 RK_PC5 RK_FUNC_GPIO &pcfg_pull_up>;
+		};
+
+		vsel1_pin: vsel1-pin {
+			rockchip,pins = <1 RK_PC1 RK_FUNC_GPIO &pcfg_pull_down>;
+		};
+
+		vsel2_pin: vsel2-pin {
+			rockchip,pins = <1 RK_PB6 RK_FUNC_GPIO &pcfg_pull_down>;
+		};
+	};
+
 	sdio0 {
 		sdio0_bus4: sdio0-bus4 {
 			rockchip,pins = <2 RK_PC4 1 &pcfg_pull_up_20ma>,
@@ -523,20 +568,6 @@
 		};
 	};
 
-	pmic {
-		pmic_int_l: pmic-int-l {
-			rockchip,pins = <1 RK_PC5 RK_FUNC_GPIO &pcfg_pull_up>;
-		};
-
-		vsel1_pin: vsel1-pin {
-			rockchip,pins = <1 RK_PC1 RK_FUNC_GPIO &pcfg_pull_down>;
-		};
-
-		vsel2_pin: vsel2-pin {
-			rockchip,pins = <1 RK_PB6 RK_FUNC_GPIO &pcfg_pull_down>;
-		};
-	};
-
 	usb-typec {
 		vcc5v0_typec_en: vcc5v0-typec-en {
 			rockchip,pins = <1 RK_PA3 RK_FUNC_GPIO &pcfg_pull_up>;
@@ -560,6 +591,11 @@
 	};
 };
 
+&pmu_io_domains {
+	pmu1830-supply = <&vcc_3v0>;
+	status = "okay";
+};
+
 &pwm2 {
 	status = "okay";
 };
@@ -570,6 +606,14 @@
 	vref-supply = <&vcc_1v8>;
 };
 
+&sdhci {
+	max-frequency = <150000000>;
+	bus-width = <8>;
+	mmc-hs200-1_8v;
+	non-removable;
+	status = "okay";
+};
+
 &sdio0 {
 	#address-cells = <1>;
 	#size-cells = <0>;
@@ -597,12 +641,13 @@
 	status = "okay";
 };
 
-&sdhci {
-	bus-width = <8>;
-	mmc-hs400-1_8v;
-	mmc-hs400-enhanced-strobe;
-	non-removable;
-	status = "okay";
+&spdif {
+
+	spdif_p0: port {
+		spdif_p0_0: endpoint {
+			remote-endpoint = <&dit_p0_0>;
+		};
+	};
 };
 
 &tcphy0 {
@@ -677,15 +722,15 @@
 	status = "okay";
 };
 
-&usbdrd_dwc3_0 {
-	status = "okay";
-	dr_mode = "otg";
-};
-
 &usbdrd3_1 {
 	status = "okay";
 };
 
+&usbdrd_dwc3_0 {
+	status = "okay";
+	dr_mode = "host";
+};
+
 &usbdrd_dwc3_1 {
 	status = "okay";
 	dr_mode = "host";
@@ -70,7 +70,7 @@ static inline bool prom_is_rex(u32 magic)
  */
 typedef struct {
 	int pagesize;
-	unsigned char bitmap[0];
+	unsigned char bitmap[];
 } memmap;
 
 
@@ -710,9 +710,9 @@ static int __init rtas_flash_init(void)
 	if (!rtas_validate_flash_data.buf)
 		return -ENOMEM;
 
-	flash_block_cache = kmem_cache_create("rtas_flash_cache",
-					      RTAS_BLK_SIZE, RTAS_BLK_SIZE, 0,
-					      NULL);
+	flash_block_cache = kmem_cache_create_usercopy("rtas_flash_cache",
+						       RTAS_BLK_SIZE, RTAS_BLK_SIZE,
+						       0, 0, RTAS_BLK_SIZE, NULL);
 	if (!flash_block_cache) {
 		printk(KERN_ERR "%s: failed to create block cache\n",
 		       __func__);
@@ -1,6 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 
 KASAN_SANITIZE := n
+KCOV_INSTRUMENT := n
 
 obj-$(CONFIG_PPC32) += kasan_init_32.o
 obj-$(CONFIG_PPC_8xx) += 8xx.o
@@ -78,6 +78,7 @@ static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs,
 static __always_inline void arch_exit_to_user_mode(void)
 {
 	mds_user_clear_cpu_buffers();
+	amd_clear_divider();
 }
 #define arch_exit_to_user_mode arch_exit_to_user_mode
 
@@ -156,9 +156,9 @@
 .endm
 
 #ifdef CONFIG_CPU_UNRET_ENTRY
-#define CALL_ZEN_UNTRAIN_RET	"call zen_untrain_ret"
+#define CALL_UNTRAIN_RET	"call entry_untrain_ret"
 #else
-#define CALL_ZEN_UNTRAIN_RET	""
+#define CALL_UNTRAIN_RET	""
 #endif
 
 /*
@@ -166,7 +166,7 @@
 * return thunk isn't mapped into the userspace tables (then again, AMD
 * typically has NO_MELTDOWN).
 *
- * While zen_untrain_ret() doesn't clobber anything but requires stack,
+ * While retbleed_untrain_ret() doesn't clobber anything but requires stack,
 * entry_ibpb() will clobber AX, CX, DX.
 *
 * As such, this must be placed after every *SWITCH_TO_KERNEL_CR3 at a point
@@ -177,14 +177,9 @@
     defined(CONFIG_CPU_SRSO)
 	ANNOTATE_UNRET_END
 	ALTERNATIVE_2 "", \
-		      CALL_ZEN_UNTRAIN_RET, X86_FEATURE_UNRET, \
+		      CALL_UNTRAIN_RET, X86_FEATURE_UNRET, \
 		      "call entry_ibpb", X86_FEATURE_ENTRY_IBPB
 #endif
-
-#ifdef CONFIG_CPU_SRSO
-	ALTERNATIVE_2 "", "call srso_untrain_ret", X86_FEATURE_SRSO, \
-			  "call srso_untrain_ret_alias", X86_FEATURE_SRSO_ALIAS
-#endif
 .endm
 
 #else /* __ASSEMBLY__ */
@@ -195,10 +190,21 @@
 	_ASM_PTR " 999b\n\t" \
 	".popsection\n\t"
 
+#ifdef CONFIG_RETHUNK
 extern void __x86_return_thunk(void);
-extern void zen_untrain_ret(void);
+#else
+static inline void __x86_return_thunk(void) {}
+#endif
+
+extern void retbleed_return_thunk(void);
+extern void srso_return_thunk(void);
+extern void srso_alias_return_thunk(void);
+
+extern void retbleed_untrain_ret(void);
 extern void srso_untrain_ret(void);
-extern void srso_untrain_ret_alias(void);
+extern void srso_alias_untrain_ret(void);
+
+extern void entry_untrain_ret(void);
 extern void entry_ibpb(void);
 
 #ifdef CONFIG_RETPOLINE
@@ -1332,3 +1332,4 @@ void noinstr amd_clear_divider(void)
 	asm volatile(ALTERNATIVE("", "div %2\n\t", X86_BUG_DIV0)
 		     :: "a" (0), "d" (0), "r" (1));
 }
+EXPORT_SYMBOL_GPL(amd_clear_divider);
@@ -61,6 +61,8 @@ EXPORT_SYMBOL_GPL(x86_pred_cmd);
 
 static DEFINE_MUTEX(spec_ctrl_mutex);
 
+void (*x86_return_thunk)(void) __ro_after_init = &__x86_return_thunk;
+
 /* Update SPEC_CTRL MSR and its cached copy unconditionally */
 static void update_spec_ctrl(u64 val)
 {
@@ -155,8 +157,13 @@ void __init cpu_select_mitigations(void)
|
|||||||
l1tf_select_mitigation();
|
l1tf_select_mitigation();
|
||||||
md_clear_select_mitigation();
|
md_clear_select_mitigation();
|
||||||
srbds_select_mitigation();
|
srbds_select_mitigation();
|
||||||
gds_select_mitigation();
|
|
||||||
|
/*
|
||||||
|
* srso_select_mitigation() depends and must run after
|
||||||
|
* retbleed_select_mitigation().
|
||||||
|
*/
|
||||||
srso_select_mitigation();
|
srso_select_mitigation();
|
||||||
|
gds_select_mitigation();
|
||||||
}
|
}
|
||||||
|
|
||||||
/*
|
/*
|
||||||
@@ -976,6 +983,9 @@ do_cmd_auto:
|
|||||||
setup_force_cpu_cap(X86_FEATURE_RETHUNK);
|
setup_force_cpu_cap(X86_FEATURE_RETHUNK);
|
||||||
setup_force_cpu_cap(X86_FEATURE_UNRET);
|
setup_force_cpu_cap(X86_FEATURE_UNRET);
|
||||||
|
|
||||||
|
if (IS_ENABLED(CONFIG_RETHUNK))
|
||||||
|
x86_return_thunk = retbleed_return_thunk;
|
||||||
|
|
||||||
if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD &&
|
if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD &&
|
||||||
boot_cpu_data.x86_vendor != X86_VENDOR_HYGON)
|
boot_cpu_data.x86_vendor != X86_VENDOR_HYGON)
|
||||||
pr_err(RETBLEED_UNTRAIN_MSG);
|
pr_err(RETBLEED_UNTRAIN_MSG);
|
||||||
@@ -2318,9 +2328,10 @@ static void __init srso_select_mitigation(void)
|
|||||||
* Zen1/2 with SMT off aren't vulnerable after the right
|
* Zen1/2 with SMT off aren't vulnerable after the right
|
||||||
* IBPB microcode has been applied.
|
* IBPB microcode has been applied.
|
||||||
*/
|
*/
|
||||||
if ((boot_cpu_data.x86 < 0x19) &&
|
if (boot_cpu_data.x86 < 0x19 && !cpu_smt_possible()) {
|
||||||
(!cpu_smt_possible() || (cpu_smt_control == CPU_SMT_DISABLED)))
|
|
||||||
setup_force_cpu_cap(X86_FEATURE_SRSO_NO);
|
setup_force_cpu_cap(X86_FEATURE_SRSO_NO);
|
||||||
|
return;
|
||||||
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
if (retbleed_mitigation == RETBLEED_MITIGATION_IBPB) {
|
if (retbleed_mitigation == RETBLEED_MITIGATION_IBPB) {
|
||||||
@@ -2349,11 +2360,15 @@ static void __init srso_select_mitigation(void)
|
|||||||
* like ftrace, static_call, etc.
|
* like ftrace, static_call, etc.
|
||||||
*/
|
*/
|
||||||
setup_force_cpu_cap(X86_FEATURE_RETHUNK);
|
setup_force_cpu_cap(X86_FEATURE_RETHUNK);
|
||||||
|
setup_force_cpu_cap(X86_FEATURE_UNRET);
|
||||||
|
|
||||||
if (boot_cpu_data.x86 == 0x19)
|
if (boot_cpu_data.x86 == 0x19) {
|
||||||
setup_force_cpu_cap(X86_FEATURE_SRSO_ALIAS);
|
setup_force_cpu_cap(X86_FEATURE_SRSO_ALIAS);
|
||||||
else
|
x86_return_thunk = srso_alias_return_thunk;
|
||||||
|
} else {
|
||||||
setup_force_cpu_cap(X86_FEATURE_SRSO);
|
setup_force_cpu_cap(X86_FEATURE_SRSO);
|
||||||
|
x86_return_thunk = srso_return_thunk;
|
||||||
|
}
|
||||||
srso_mitigation = SRSO_MITIGATION_SAFE_RET;
|
srso_mitigation = SRSO_MITIGATION_SAFE_RET;
|
||||||
} else {
|
} else {
|
||||||
pr_err("WARNING: kernel not compiled with CPU_SRSO.\n");
|
pr_err("WARNING: kernel not compiled with CPU_SRSO.\n");
|
||||||
@@ -2602,6 +2617,9 @@ static ssize_t gds_show_state(char *buf)
|
|||||||
|
|
||||||
static ssize_t srso_show_state(char *buf)
|
static ssize_t srso_show_state(char *buf)
|
||||||
{
|
{
|
||||||
|
if (boot_cpu_has(X86_FEATURE_SRSO_NO))
|
||||||
|
return sysfs_emit(buf, "Mitigation: SMT disabled\n");
|
||||||
|
|
||||||
return sysfs_emit(buf, "%s%s\n",
|
return sysfs_emit(buf, "%s%s\n",
|
||||||
srso_strings[srso_mitigation],
|
srso_strings[srso_mitigation],
|
||||||
(cpu_has_ibpb_brtype_microcode() ? "" : ", no microcode"));
|
(cpu_has_ibpb_brtype_microcode() ? "" : ", no microcode"));
|
||||||
|
|||||||
--- a/arch/x86/kernel/static_call.c
+++ b/arch/x86/kernel/static_call.c
@@ -123,6 +123,19 @@ EXPORT_SYMBOL_GPL(arch_static_call_transform);
  */
 bool __static_call_fixup(void *tramp, u8 op, void *dest)
 {
+	unsigned long addr = (unsigned long)tramp;
+	/*
+	 * Not all .return_sites are a static_call trampoline (most are not).
+	 * Check if the 3 bytes after the return are still kernel text, if not,
+	 * then this definitely is not a trampoline and we need not worry
+	 * further.
+	 *
+	 * This avoids the memcmp() below tripping over pagefaults etc..
+	 */
+	if (((addr >> PAGE_SHIFT) != ((addr + 7) >> PAGE_SHIFT)) &&
+	    !kernel_text_address(addr + 7))
+		return false;
+
 	if (memcmp(tramp+5, tramp_ud, 3)) {
 		/* Not a trampoline site, not our problem. */
 		return false;
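[Editorial aside, not part of the patch: the page-crossing test added above, `(addr >> PAGE_SHIFT) != ((addr + 7) >> PAGE_SHIFT)`, is what lets the fixup skip the memcmp() when the bytes after the RET might sit in a different, possibly unmapped, page. A minimal Python sketch of the check, assuming 4 KiB pages as on x86:]

```python
PAGE_SHIFT = 12  # assumption: 4 KiB pages, as on x86

def may_fault(addr: int, span: int = 7) -> bool:
    """True when [addr, addr + span] straddles a page boundary, so the
    bytes past the RET could live in a different (possibly unmapped)
    page and must be validated before reading them."""
    return (addr >> PAGE_SHIFT) != ((addr + span) >> PAGE_SHIFT)

assert not may_fault(0xffffffff81000100)  # well inside one page
assert may_fault(0xffffffff81000ffa)      # 0xffa + 7 crosses into the next page
```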
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -198,8 +198,6 @@ DEFINE_IDTENTRY(exc_divide_error)
 {
 	do_error_trap(regs, 0, "divide error", X86_TRAP_DE, SIGFPE,
 		      FPE_INTDIV, error_get_trap_addr(regs));
-
-	amd_clear_divider();
 }

 DEFINE_IDTENTRY(exc_overflow)
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -134,18 +134,18 @@ SECTIONS
 		KPROBES_TEXT
 		ALIGN_ENTRY_TEXT_BEGIN
 #ifdef CONFIG_CPU_SRSO
-		*(.text.__x86.rethunk_untrain)
+		*(.text..__x86.rethunk_untrain)
 #endif

 		ENTRY_TEXT

 #ifdef CONFIG_CPU_SRSO
 		/*
-		 * See the comment above srso_untrain_ret_alias()'s
+		 * See the comment above srso_alias_untrain_ret()'s
 		 * definition.
 		 */
-		. = srso_untrain_ret_alias | (1 << 2) | (1 << 8) | (1 << 14) | (1 << 20);
-		*(.text.__x86.rethunk_safe)
+		. = srso_alias_untrain_ret | (1 << 2) | (1 << 8) | (1 << 14) | (1 << 20);
+		*(.text..__x86.rethunk_safe)
 #endif
 		ALIGN_ENTRY_TEXT_END
 		SOFTIRQENTRY_TEXT
@@ -155,8 +155,8 @@ SECTIONS

 #ifdef CONFIG_RETPOLINE
 		__indirect_thunk_start = .;
-		*(.text.__x86.indirect_thunk)
-		*(.text.__x86.return_thunk)
+		*(.text..__x86.indirect_thunk)
+		*(.text..__x86.return_thunk)
 		__indirect_thunk_end = .;
 #endif
 	} :text =0xcccc
@@ -518,7 +518,7 @@ INIT_PER_CPU(irq_stack_backing_store);
 #endif

 #ifdef CONFIG_RETHUNK
-. = ASSERT((__ret & 0x3f) == 0, "__ret not cacheline-aligned");
+. = ASSERT((retbleed_return_thunk & 0x3f) == 0, "retbleed_return_thunk not cacheline-aligned");
 . = ASSERT((srso_safe_ret & 0x3f) == 0, "srso_safe_ret not cacheline-aligned");
 #endif

@@ -533,8 +533,8 @@ INIT_PER_CPU(irq_stack_backing_store);
  * Instead do: (A | B) - (A & B) in order to compute the XOR
  * of the two function addresses:
  */
-. = ASSERT(((ABSOLUTE(srso_untrain_ret_alias) | srso_safe_ret_alias) -
-		(ABSOLUTE(srso_untrain_ret_alias) & srso_safe_ret_alias)) == ((1 << 2) | (1 << 8) | (1 << 14) | (1 << 20)),
+. = ASSERT(((ABSOLUTE(srso_alias_untrain_ret) | srso_alias_safe_ret) -
+		(ABSOLUTE(srso_alias_untrain_ret) & srso_alias_safe_ret)) == ((1 << 2) | (1 << 8) | (1 << 14) | (1 << 20)),
 		"SRSO function pair won't alias");
 #endif
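[Editorial aside, not part of the patch: the linker-script ASSERT above works around the lack of a XOR operator by computing A ^ B as (A | B) - (A & B). The identity, and the expected alias mask (bits 2, 8, 14 and 20), can be sanity-checked in a few lines of Python; the addresses below are made up for illustration:]

```python
# A linker script has no ^ operator, so the ASSERT computes
# A ^ B as (A | B) - (A & B); the identity holds for any integers.
MASK = (1 << 2) | (1 << 8) | (1 << 14) | (1 << 20)

def lds_xor(a: int, b: int) -> int:
    return (a | b) - (a & b)

# Hypothetical 2M-aligned untrain address; its safe-ret twin sets
# exactly the MASK bits, so the pair aliases in the BTB as intended.
untrain = 0xffffffff82000000
safe = untrain | MASK
assert lds_xor(untrain, safe) == untrain ^ safe == MASK
```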
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3376,6 +3376,7 @@ static void svm_flush_tlb_gva(struct kvm_vcpu *vcpu, gva_t gva)

 static void svm_prepare_guest_switch(struct kvm_vcpu *vcpu)
 {
+	amd_clear_divider();
 }

 static inline void sync_cr8_to_lapic(struct kvm_vcpu *vcpu)
--- a/arch/x86/lib/retpoline.S
+++ b/arch/x86/lib/retpoline.S
@@ -11,7 +11,7 @@
 #include <asm/frame.h>
 #include <asm/nops.h>

-	.section .text.__x86.indirect_thunk
+	.section .text..__x86.indirect_thunk

 .macro RETPOLINE reg
 	ANNOTATE_INTRA_FUNCTION_CALL
@@ -75,74 +75,105 @@ SYM_CODE_END(__x86_indirect_thunk_array)
 #ifdef CONFIG_RETHUNK

 /*
- * srso_untrain_ret_alias() and srso_safe_ret_alias() are placed at
+ * srso_alias_untrain_ret() and srso_alias_safe_ret() are placed at
  * special addresses:
  *
- * - srso_untrain_ret_alias() is 2M aligned
- * - srso_safe_ret_alias() is also in the same 2M page but bits 2, 8, 14
+ * - srso_alias_untrain_ret() is 2M aligned
+ * - srso_alias_safe_ret() is also in the same 2M page but bits 2, 8, 14
  * and 20 in its virtual address are set (while those bits in the
- * srso_untrain_ret_alias() function are cleared).
+ * srso_alias_untrain_ret() function are cleared).
  *
  * This guarantees that those two addresses will alias in the branch
  * target buffer of Zen3/4 generations, leading to any potential
  * poisoned entries at that BTB slot to get evicted.
  *
- * As a result, srso_safe_ret_alias() becomes a safe return.
+ * As a result, srso_alias_safe_ret() becomes a safe return.
  */
 #ifdef CONFIG_CPU_SRSO
-	.section .text.__x86.rethunk_untrain
+	.section .text..__x86.rethunk_untrain

-SYM_START(srso_untrain_ret_alias, SYM_L_GLOBAL, SYM_A_NONE)
+SYM_START(srso_alias_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
+	UNWIND_HINT_FUNC
 	ASM_NOP2
 	lfence
-	jmp __x86_return_thunk
-SYM_FUNC_END(srso_untrain_ret_alias)
-__EXPORT_THUNK(srso_untrain_ret_alias)
+	jmp srso_alias_return_thunk
+SYM_FUNC_END(srso_alias_untrain_ret)
+__EXPORT_THUNK(srso_alias_untrain_ret)

-	.section .text.__x86.rethunk_safe
+	.section .text..__x86.rethunk_safe
+#else
+/* dummy definition for alternatives */
+SYM_START(srso_alias_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
+	ANNOTATE_UNRET_SAFE
+	ret
+	int3
+SYM_FUNC_END(srso_alias_untrain_ret)
 #endif

-/* Needs a definition for the __x86_return_thunk alternative below. */
-SYM_START(srso_safe_ret_alias, SYM_L_GLOBAL, SYM_A_NONE)
-#ifdef CONFIG_CPU_SRSO
-	add $8, %_ASM_SP
+SYM_START(srso_alias_safe_ret, SYM_L_GLOBAL, SYM_A_NONE)
+	lea 8(%_ASM_SP), %_ASM_SP
 	UNWIND_HINT_FUNC
-#endif
 	ANNOTATE_UNRET_SAFE
 	ret
 	int3
-SYM_FUNC_END(srso_safe_ret_alias)
+SYM_FUNC_END(srso_alias_safe_ret)

-	.section .text.__x86.return_thunk
+	.section .text..__x86.return_thunk
+
+SYM_CODE_START(srso_alias_return_thunk)
+	UNWIND_HINT_FUNC
+	ANNOTATE_NOENDBR
+	call srso_alias_safe_ret
+	ud2
+SYM_CODE_END(srso_alias_return_thunk)
+
+/*
+ * Some generic notes on the untraining sequences:
+ *
+ * They are interchangeable when it comes to flushing potentially wrong
+ * RET predictions from the BTB.
+ *
+ * The SRSO Zen1/2 (MOVABS) untraining sequence is longer than the
+ * Retbleed sequence because the return sequence done there
+ * (srso_safe_ret()) is longer and the return sequence must fully nest
+ * (end before) the untraining sequence. Therefore, the untraining
+ * sequence must fully overlap the return sequence.
+ *
+ * Regarding alignment - the instructions which need to be untrained,
+ * must all start at a cacheline boundary for Zen1/2 generations. That
+ * is, instruction sequences starting at srso_safe_ret() and
+ * the respective instruction sequences at retbleed_return_thunk()
+ * must start at a cacheline boundary.
+ */

 /*
  * Safety details here pertain to the AMD Zen{1,2} microarchitecture:
- * 1) The RET at __x86_return_thunk must be on a 64 byte boundary, for
+ * 1) The RET at retbleed_return_thunk must be on a 64 byte boundary, for
  *    alignment within the BTB.
- * 2) The instruction at zen_untrain_ret must contain, and not
+ * 2) The instruction at retbleed_untrain_ret must contain, and not
  *    end with, the 0xc3 byte of the RET.
  * 3) STIBP must be enabled, or SMT disabled, to prevent the sibling thread
  *    from re-poisioning the BTB prediction.
  */
 	.align 64
-	.skip 64 - (__ret - zen_untrain_ret), 0xcc
-SYM_FUNC_START_NOALIGN(zen_untrain_ret);
+	.skip 64 - (retbleed_return_thunk - retbleed_untrain_ret), 0xcc
+SYM_FUNC_START_NOALIGN(retbleed_untrain_ret);

 /*
- * As executed from zen_untrain_ret, this is:
+ * As executed from retbleed_untrain_ret, this is:
  *
  *   TEST $0xcc, %bl
  *   LFENCE
- *   JMP __x86_return_thunk
+ *   JMP retbleed_return_thunk
  *
  * Executing the TEST instruction has a side effect of evicting any BTB
  * prediction (potentially attacker controlled) attached to the RET, as
- * __x86_return_thunk + 1 isn't an instruction boundary at the moment.
+ * retbleed_return_thunk + 1 isn't an instruction boundary at the moment.
  */
 	.byte 0xf6

 /*
- * As executed from __x86_return_thunk, this is a plain RET.
+ * As executed from retbleed_return_thunk, this is a plain RET.
  *
  * As part of the TEST above, RET is the ModRM byte, and INT3 the imm8.
  *
@@ -154,13 +185,13 @@ SYM_FUNC_START_NOALIGN(zen_untrain_ret);
  * With SMT enabled and STIBP active, a sibling thread cannot poison
  * RET's prediction to a type of its choice, but can evict the
  * prediction due to competitive sharing. If the prediction is
- * evicted, __x86_return_thunk will suffer Straight Line Speculation
+ * evicted, retbleed_return_thunk will suffer Straight Line Speculation
  * which will be contained safely by the INT3.
  */
-SYM_INNER_LABEL(__ret, SYM_L_GLOBAL)
+SYM_INNER_LABEL(retbleed_return_thunk, SYM_L_GLOBAL)
 	ret
 	int3
-SYM_CODE_END(__ret)
+SYM_CODE_END(retbleed_return_thunk)

 /*
  * Ensure the TEST decoding / BTB invalidation is complete.
@@ -171,16 +202,16 @@ SYM_CODE_END(__ret)
  * Jump back and execute the RET in the middle of the TEST instruction.
  * INT3 is for SLS protection.
  */
-	jmp __ret
+	jmp retbleed_return_thunk
 	int3
-SYM_FUNC_END(zen_untrain_ret)
-__EXPORT_THUNK(zen_untrain_ret)
+SYM_FUNC_END(retbleed_untrain_ret)
+__EXPORT_THUNK(retbleed_untrain_ret)

 /*
- * SRSO untraining sequence for Zen1/2, similar to zen_untrain_ret()
+ * SRSO untraining sequence for Zen1/2, similar to retbleed_untrain_ret()
  * above. On kernel entry, srso_untrain_ret() is executed which is a
  *
- *   movabs $0xccccccc308c48348,%rax
+ *   movabs $0xccccc30824648d48,%rax
  *
  * and when the return thunk executes the inner label srso_safe_ret()
  * later, it is a stack manipulation and a RET which is mispredicted and
@@ -191,22 +222,44 @@ __EXPORT_THUNK(zen_untrain_ret)
 SYM_START(srso_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
 	.byte 0x48, 0xb8

+/*
+ * This forces the function return instruction to speculate into a trap
+ * (UD2 in srso_return_thunk() below). This RET will then mispredict
+ * and execution will continue at the return site read from the top of
+ * the stack.
+ */
 SYM_INNER_LABEL(srso_safe_ret, SYM_L_GLOBAL)
-	add $8, %_ASM_SP
+	lea 8(%_ASM_SP), %_ASM_SP
 	ret
 	int3
 	int3
-	int3
+	/* end of movabs */
 	lfence
 	call srso_safe_ret
-	int3
+	ud2
 SYM_CODE_END(srso_safe_ret)
 SYM_FUNC_END(srso_untrain_ret)
 __EXPORT_THUNK(srso_untrain_ret)

-SYM_FUNC_START(__x86_return_thunk)
-	ALTERNATIVE_2 "jmp __ret", "call srso_safe_ret", X86_FEATURE_SRSO, \
-		      "call srso_safe_ret_alias", X86_FEATURE_SRSO_ALIAS
+SYM_CODE_START(srso_return_thunk)
+	UNWIND_HINT_FUNC
+	ANNOTATE_NOENDBR
+	call srso_safe_ret
+	ud2
+SYM_CODE_END(srso_return_thunk)
+
+SYM_FUNC_START(entry_untrain_ret)
+	ALTERNATIVE_2 "jmp retbleed_untrain_ret", \
+		      "jmp srso_untrain_ret", X86_FEATURE_SRSO, \
+		      "jmp srso_alias_untrain_ret", X86_FEATURE_SRSO_ALIAS
+SYM_FUNC_END(entry_untrain_ret)
+__EXPORT_THUNK(entry_untrain_ret)
+
+SYM_CODE_START(__x86_return_thunk)
+	UNWIND_HINT_FUNC
+	ANNOTATE_NOENDBR
+	ANNOTATE_UNRET_SAFE
+	ret
 	int3
 SYM_CODE_END(__x86_return_thunk)
 EXPORT_SYMBOL(__x86_return_thunk)
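[Editorial aside, not part of the patch: the new MOVABS immediate `0xccccc30824648d48` encodes, byte for byte, the srso_safe_ret() sequence the comments describe, i.e. `lea 8(%rsp),%rsp; ret; int3; int3` when entered two bytes into the instruction. A quick Python check of the little-endian byte layout:]

```python
import struct

IMM = 0xccccc30824648d48  # immediate of the movabs in srso_untrain_ret

b = struct.pack("<Q", IMM)  # bytes as they sit in the instruction stream
# Entered at srso_safe_ret (2 bytes into the MOVABS), they decode as:
#   48 8d 64 24 08   lea 0x8(%rsp),%rsp
#   c3               ret
#   cc cc            int3 padding
assert b == bytes([0x48, 0x8d, 0x64, 0x24, 0x08, 0xc3, 0xcc, 0xcc])
```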
--- a/drivers/bluetooth/btusb.c
+++ b/drivers/bluetooth/btusb.c
@@ -432,6 +432,9 @@ static const struct usb_device_id blacklist_table[] = {
 	{ USB_DEVICE(0x0489, 0xe0d9), .driver_info = BTUSB_MEDIATEK |
 						     BTUSB_WIDEBAND_SPEECH |
 						     BTUSB_VALID_LE_STATES },
+	{ USB_DEVICE(0x0489, 0xe0f5), .driver_info = BTUSB_MEDIATEK |
+						     BTUSB_WIDEBAND_SPEECH |
+						     BTUSB_VALID_LE_STATES },
 	{ USB_DEVICE(0x13d3, 0x3568), .driver_info = BTUSB_MEDIATEK |
 						     BTUSB_WIDEBAND_SPEECH |
 						     BTUSB_VALID_LE_STATES },
--- a/drivers/bus/Makefile
+++ b/drivers/bus/Makefile
@@ -38,4 +38,4 @@ obj-$(CONFIG_VEXPRESS_CONFIG)	+= vexpress-config.o
 obj-$(CONFIG_DA8XX_MSTPRI)	+= da8xx-mstpri.o

 # MHI
-obj-$(CONFIG_MHI_BUS)		+= mhi/
+obj-y				+= mhi/
--- a/drivers/bus/mhi/Kconfig
+++ b/drivers/bus/mhi/Kconfig
@@ -2,21 +2,7 @@
 #
 # MHI bus
 #
-# Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
+# Copyright (c) 2021, Linaro Ltd.
 #

-config MHI_BUS
-	tristate "Modem Host Interface (MHI) bus"
-	help
-	  Bus driver for MHI protocol. Modem Host Interface (MHI) is a
-	  communication protocol used by the host processors to control
-	  and communicate with modem devices over a high speed peripheral
-	  bus or shared memory.
-
-config MHI_BUS_DEBUG
-	bool "Debugfs support for the MHI bus"
-	depends on MHI_BUS && DEBUG_FS
-	help
-	  Enable debugfs support for use with the MHI transport. Allows
-	  reading and/or modifying some values within the MHI controller
-	  for debug and test purposes.
+source "drivers/bus/mhi/host/Kconfig"
--- a/drivers/bus/mhi/Makefile
+++ b/drivers/bus/mhi/Makefile
@@ -1,2 +1,2 @@
-# core layer
-obj-y += core/
+# Host MHI stack
+obj-y += host/
--- /dev/null
+++ b/drivers/bus/mhi/host/Kconfig
@@ -0,0 +1,31 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# MHI bus
+#
+# Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
+#
+
+config MHI_BUS
+	tristate "Modem Host Interface (MHI) bus"
+	help
+	  Bus driver for MHI protocol. Modem Host Interface (MHI) is a
+	  communication protocol used by the host processors to control
+	  and communicate with modem devices over a high speed peripheral
+	  bus or shared memory.
+
+config MHI_BUS_DEBUG
+	bool "Debugfs support for the MHI bus"
+	depends on MHI_BUS && DEBUG_FS
+	help
+	  Enable debugfs support for use with the MHI transport. Allows
+	  reading and/or modifying some values within the MHI controller
+	  for debug and test purposes.
+
+config MHI_BUS_PCI_GENERIC
+	tristate "MHI PCI controller driver"
+	depends on MHI_BUS
+	depends on PCI
+	help
+	  This driver provides MHI PCI controller driver for devices such as
+	  Qualcomm SDX55 based PCIe modems.
+
--- a/drivers/bus/mhi/host/Makefile
+++ b/drivers/bus/mhi/host/Makefile
@@ -1,4 +1,6 @@
 obj-$(CONFIG_MHI_BUS) += mhi.o

 mhi-y := init.o main.o pm.o boot.o
 mhi-$(CONFIG_MHI_BUS_DEBUG) += debugfs.o
+obj-$(CONFIG_MHI_BUS_PCI_GENERIC) += mhi_pci_generic.o
+mhi_pci_generic-y += pci_generic.o
--- a/drivers/bus/mhi/host/init.c
+++ b/drivers/bus/mhi/host/init.c
@@ -498,6 +498,12 @@ int mhi_init_mmio(struct mhi_controller *mhi_cntrl)
 		return -EIO;
 	}

+	if (val >= mhi_cntrl->reg_len - (8 * MHI_DEV_WAKE_DB)) {
+		dev_err(dev, "CHDB offset: 0x%x is out of range: 0x%zx\n",
+			val, mhi_cntrl->reg_len - (8 * MHI_DEV_WAKE_DB));
+		return -ERANGE;
+	}
+
 	/* Setup wake db */
 	mhi_cntrl->wake_db = base + val + (8 * MHI_DEV_WAKE_DB);
 	mhi_write_reg(mhi_cntrl, mhi_cntrl->wake_db, 4, 0);
@@ -517,6 +523,12 @@ int mhi_init_mmio(struct mhi_controller *mhi_cntrl)
 		return -EIO;
 	}

+	if (val >= mhi_cntrl->reg_len - (8 * mhi_cntrl->total_ev_rings)) {
+		dev_err(dev, "ERDB offset: 0x%x is out of range: 0x%zx\n",
+			val, mhi_cntrl->reg_len - (8 * mhi_cntrl->total_ev_rings));
+		return -ERANGE;
+	}
+
 	/* Setup event db address for each ev_ring */
 	mhi_event = mhi_cntrl->mhi_event;
 	for (i = 0; i < mhi_cntrl->total_ev_rings; i++, val += 8, mhi_event++) {
--- /dev/null
+++ b/drivers/bus/mhi/host/pci_generic.c
@@ -0,0 +1,345 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * MHI PCI driver - MHI over PCI controller driver
+ *
+ * This module is a generic driver for registering MHI-over-PCI devices,
+ * such as PCIe QCOM modems.
+ *
+ * Copyright (C) 2020 Linaro Ltd <loic.poulain@linaro.org>
+ */
+
+#include <linux/device.h>
+#include <linux/mhi.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+
+#define MHI_PCI_DEFAULT_BAR_NUM 0
+
+/**
+ * struct mhi_pci_dev_info - MHI PCI device specific information
+ * @config: MHI controller configuration
+ * @name: name of the PCI module
+ * @fw: firmware path (if any)
+ * @edl: emergency download mode firmware path (if any)
+ * @bar_num: PCI base address register to use for MHI MMIO register space
+ * @dma_data_width: DMA transfer word size (32 or 64 bits)
+ */
+struct mhi_pci_dev_info {
+	const struct mhi_controller_config *config;
+	const char *name;
+	const char *fw;
+	const char *edl;
+	unsigned int bar_num;
+	unsigned int dma_data_width;
+};
+
+#define MHI_CHANNEL_CONFIG_UL(ch_num, ch_name, el_count, ev_ring) \
+	{ \
+		.num = ch_num, \
+		.name = ch_name, \
+		.num_elements = el_count, \
+		.event_ring = ev_ring, \
+		.dir = DMA_TO_DEVICE, \
+		.ee_mask = BIT(MHI_EE_AMSS), \
+		.pollcfg = 0, \
+		.doorbell = MHI_DB_BRST_DISABLE, \
+		.lpm_notify = false, \
+		.offload_channel = false, \
+		.doorbell_mode_switch = false, \
+	} \
+
+#define MHI_CHANNEL_CONFIG_DL(ch_num, ch_name, el_count, ev_ring) \
+	{ \
+		.num = ch_num, \
+		.name = ch_name, \
+		.num_elements = el_count, \
+		.event_ring = ev_ring, \
+		.dir = DMA_FROM_DEVICE, \
+		.ee_mask = BIT(MHI_EE_AMSS), \
+		.pollcfg = 0, \
+		.doorbell = MHI_DB_BRST_DISABLE, \
+		.lpm_notify = false, \
+		.offload_channel = false, \
+		.doorbell_mode_switch = false, \
+	}
+
+#define MHI_EVENT_CONFIG_CTRL(ev_ring) \
+	{ \
+		.num_elements = 64, \
+		.irq_moderation_ms = 0, \
+		.irq = (ev_ring) + 1, \
+		.priority = 1, \
+		.mode = MHI_DB_BRST_DISABLE, \
+		.data_type = MHI_ER_CTRL, \
+		.hardware_event = false, \
+		.client_managed = false, \
+		.offload_channel = false, \
+	}
+
+#define MHI_EVENT_CONFIG_DATA(ev_ring) \
+	{ \
+		.num_elements = 128, \
+		.irq_moderation_ms = 5, \
+		.irq = (ev_ring) + 1, \
+		.priority = 1, \
+		.mode = MHI_DB_BRST_DISABLE, \
+		.data_type = MHI_ER_DATA, \
+		.hardware_event = false, \
+		.client_managed = false, \
+		.offload_channel = false, \
+	}
+
+#define MHI_EVENT_CONFIG_HW_DATA(ev_ring, ch_num) \
+	{ \
+		.num_elements = 128, \
+		.irq_moderation_ms = 5, \
+		.irq = (ev_ring) + 1, \
+		.priority = 1, \
+		.mode = MHI_DB_BRST_DISABLE, \
+		.data_type = MHI_ER_DATA, \
+		.hardware_event = true, \
+		.client_managed = false, \
+		.offload_channel = false, \
+		.channel = ch_num, \
+	}
+
+static const struct mhi_channel_config modem_qcom_v1_mhi_channels[] = {
+	MHI_CHANNEL_CONFIG_UL(12, "MBIM", 4, 0),
+	MHI_CHANNEL_CONFIG_DL(13, "MBIM", 4, 0),
+	MHI_CHANNEL_CONFIG_UL(14, "QMI", 4, 0),
+	MHI_CHANNEL_CONFIG_DL(15, "QMI", 4, 0),
+	MHI_CHANNEL_CONFIG_UL(20, "IPCR", 8, 0),
+	MHI_CHANNEL_CONFIG_DL(21, "IPCR", 8, 0),
+	MHI_CHANNEL_CONFIG_UL(100, "IP_HW0", 128, 1),
+	MHI_CHANNEL_CONFIG_DL(101, "IP_HW0", 128, 2),
+};
+
+static const struct mhi_event_config modem_qcom_v1_mhi_events[] = {
+	/* first ring is control+data ring */
+	MHI_EVENT_CONFIG_CTRL(0),
+	/* Hardware channels request dedicated hardware event rings */
+	MHI_EVENT_CONFIG_HW_DATA(1, 100),
+	MHI_EVENT_CONFIG_HW_DATA(2, 101)
+};
+
+static const struct mhi_controller_config modem_qcom_v1_mhiv_config = {
+	.max_channels = 128,
+	.timeout_ms = 5000,
+	.num_channels = ARRAY_SIZE(modem_qcom_v1_mhi_channels),
+	.ch_cfg = modem_qcom_v1_mhi_channels,
+	.num_events = ARRAY_SIZE(modem_qcom_v1_mhi_events),
+	.event_cfg = modem_qcom_v1_mhi_events,
+};
+
+static const struct mhi_pci_dev_info mhi_qcom_sdx55_info = {
+	.name = "qcom-sdx55m",
+	.fw = "qcom/sdx55m/sbl1.mbn",
+	.edl = "qcom/sdx55m/edl.mbn",
+	.config = &modem_qcom_v1_mhiv_config,
+	.bar_num = MHI_PCI_DEFAULT_BAR_NUM,
+	.dma_data_width = 32
+};
+
+static const struct pci_device_id mhi_pci_id_table[] = {
+	{ PCI_DEVICE(PCI_VENDOR_ID_QCOM, 0x0306),
+		.driver_data = (kernel_ulong_t) &mhi_qcom_sdx55_info },
+	{ }
+};
+MODULE_DEVICE_TABLE(pci, mhi_pci_id_table);
+
+static int mhi_pci_read_reg(struct mhi_controller *mhi_cntrl,
+			    void __iomem *addr, u32 *out)
+{
+	*out = readl(addr);
+	return 0;
+}
+
+static void mhi_pci_write_reg(struct mhi_controller *mhi_cntrl,
+			      void __iomem *addr, u32 val)
+{
+	writel(val, addr);
+}
+
+static void mhi_pci_status_cb(struct mhi_controller *mhi_cntrl,
+			      enum mhi_callback cb)
+{
+	/* Nothing to do for now */
+}
+
+static int mhi_pci_claim(struct mhi_controller *mhi_cntrl,
+			 unsigned int bar_num, u64 dma_mask)
+{
+	struct pci_dev *pdev = to_pci_dev(mhi_cntrl->cntrl_dev);
+	int err;
+
+	err = pci_assign_resource(pdev, bar_num);
+	if (err)
+		return err;
+
+	err = pcim_enable_device(pdev);
+	if (err) {
+		dev_err(&pdev->dev, "failed to enable pci device: %d\n", err);
+		return err;
+	}
+
+	err = pcim_iomap_regions(pdev, 1 << bar_num, pci_name(pdev));
+	if (err) {
+		dev_err(&pdev->dev, "failed to map pci region: %d\n", err);
+		return err;
+	}
+	mhi_cntrl->regs = pcim_iomap_table(pdev)[bar_num];
+
+	err = pci_set_dma_mask(pdev, dma_mask);
+	if (err) {
+		dev_err(&pdev->dev, "Cannot set proper DMA mask\n");
+		return err;
+	}
+
+	err = pci_set_consistent_dma_mask(pdev, dma_mask);
+	if (err) {
+		dev_err(&pdev->dev, "set consistent dma mask failed\n");
+		return err;
+	}
+
+	pci_set_master(pdev);
+
+	return 0;
+}
+
+static int mhi_pci_get_irqs(struct mhi_controller *mhi_cntrl,
+			    const struct mhi_controller_config *mhi_cntrl_config)
+{
+	struct pci_dev *pdev = to_pci_dev(mhi_cntrl->cntrl_dev);
+	int nr_vectors, i;
+	int *irq;
+
+	/*
+	 * Alloc one MSI vector for BHI + one vector per event ring, ideally...
+	 * No explicit pci_free_irq_vectors required, done by pcim_release.
+	 */
+	mhi_cntrl->nr_irqs = 1 + mhi_cntrl_config->num_events;
+
+	nr_vectors = pci_alloc_irq_vectors(pdev, 1, mhi_cntrl->nr_irqs, PCI_IRQ_MSI);
+	if (nr_vectors < 0) {
+		dev_err(&pdev->dev, "Error allocating MSI vectors %d\n",
+			nr_vectors);
+		return nr_vectors;
+	}
+
+	if (nr_vectors < mhi_cntrl->nr_irqs) {
+		dev_warn(&pdev->dev, "Not enough MSI vectors (%d/%d), use shared MSI\n",
+			 nr_vectors, mhi_cntrl_config->num_events);
+	}
+
+	irq = devm_kcalloc(&pdev->dev, mhi_cntrl->nr_irqs, sizeof(int), GFP_KERNEL);
+	if (!irq)
+		return -ENOMEM;
+
+	for (i = 0; i < mhi_cntrl->nr_irqs; i++) {
+		int vector = i >= nr_vectors ? (nr_vectors - 1) : i;
+
+		irq[i] = pci_irq_vector(pdev, vector);
+	}
+
+	mhi_cntrl->irq = irq;
+
+	return 0;
+}
+
+static int mhi_pci_runtime_get(struct mhi_controller *mhi_cntrl)
+{
+	/* no PM for now */
+	return 0;
+}
+
+static void mhi_pci_runtime_put(struct mhi_controller *mhi_cntrl)
|
||||||
|
{
|
||||||
|
/* no PM for now */
|
||||||
|
}
|
||||||
|
|
||||||
|
static int mhi_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
|
||||||
|
{
|
||||||
|
const struct mhi_pci_dev_info *info = (struct mhi_pci_dev_info *) id->driver_data;
|
||||||
|
const struct mhi_controller_config *mhi_cntrl_config;
|
||||||
|
struct mhi_controller *mhi_cntrl;
|
||||||
|
int err;
|
||||||
|
|
||||||
|
dev_dbg(&pdev->dev, "MHI PCI device found: %s\n", info->name);
|
||||||
|
|
||||||
|
mhi_cntrl = mhi_alloc_controller();
|
||||||
|
if (!mhi_cntrl)
|
||||||
|
return -ENOMEM;
|
||||||
|
|
||||||
|
mhi_cntrl_config = info->config;
|
||||||
|
mhi_cntrl->cntrl_dev = &pdev->dev;
|
||||||
|
mhi_cntrl->iova_start = 0;
|
||||||
|
mhi_cntrl->iova_stop = DMA_BIT_MASK(info->dma_data_width);
|
||||||
|
mhi_cntrl->fw_image = info->fw;
|
||||||
|
mhi_cntrl->edl_image = info->edl;
|
||||||
|
|
||||||
|
mhi_cntrl->read_reg = mhi_pci_read_reg;
|
||||||
|
mhi_cntrl->write_reg = mhi_pci_write_reg;
|
||||||
|
mhi_cntrl->status_cb = mhi_pci_status_cb;
|
||||||
|
mhi_cntrl->runtime_get = mhi_pci_runtime_get;
|
||||||
|
mhi_cntrl->runtime_put = mhi_pci_runtime_put;
|
||||||
|
|
||||||
|
err = mhi_pci_claim(mhi_cntrl, info->bar_num, DMA_BIT_MASK(info->dma_data_width));
|
||||||
|
if (err)
|
||||||
|
goto err_release;
|
||||||
|
|
||||||
|
err = mhi_pci_get_irqs(mhi_cntrl, mhi_cntrl_config);
|
||||||
|
if (err)
|
||||||
|
goto err_release;
|
||||||
|
|
||||||
|
pci_set_drvdata(pdev, mhi_cntrl);
|
||||||
|
|
||||||
|
err = mhi_register_controller(mhi_cntrl, mhi_cntrl_config);
|
||||||
|
if (err)
|
||||||
|
goto err_release;
|
||||||
|
|
||||||
|
/* MHI bus does not power up the controller by default */
|
||||||
|
err = mhi_prepare_for_power_up(mhi_cntrl);
|
||||||
|
if (err) {
|
||||||
|
dev_err(&pdev->dev, "failed to prepare MHI controller\n");
|
||||||
|
goto err_unregister;
|
||||||
|
}
|
||||||
|
|
||||||
|
err = mhi_sync_power_up(mhi_cntrl);
|
||||||
|
if (err) {
|
||||||
|
dev_err(&pdev->dev, "failed to power up MHI controller\n");
|
||||||
|
goto err_unprepare;
|
||||||
|
}
|
||||||
|
|
||||||
|
return 0;
|
||||||
|
|
||||||
|
err_unprepare:
|
||||||
|
mhi_unprepare_after_power_down(mhi_cntrl);
|
||||||
|
err_unregister:
|
||||||
|
mhi_unregister_controller(mhi_cntrl);
|
||||||
|
err_release:
|
||||||
|
mhi_free_controller(mhi_cntrl);
|
||||||
|
|
||||||
|
return err;
|
||||||
|
}
|
||||||
|
|
||||||
|
static void mhi_pci_remove(struct pci_dev *pdev)
|
||||||
|
{
|
||||||
|
struct mhi_controller *mhi_cntrl = pci_get_drvdata(pdev);
|
||||||
|
|
||||||
|
mhi_power_down(mhi_cntrl, true);
|
||||||
|
mhi_unprepare_after_power_down(mhi_cntrl);
|
||||||
|
mhi_unregister_controller(mhi_cntrl);
|
||||||
|
mhi_free_controller(mhi_cntrl);
|
||||||
|
}
|
||||||
|
|
||||||
|
static struct pci_driver mhi_pci_driver = {
|
||||||
|
.name = "mhi-pci-generic",
|
||||||
|
.id_table = mhi_pci_id_table,
|
||||||
|
.probe = mhi_pci_probe,
|
||||||
|
.remove = mhi_pci_remove
|
||||||
|
};
|
||||||
|
module_pci_driver(mhi_pci_driver);
|
||||||
|
|
||||||
|
MODULE_AUTHOR("Loic Poulain <loic.poulain@linaro.org>");
|
||||||
|
MODULE_DESCRIPTION("Modem Host Interface (MHI) PCI controller driver");
|
||||||
|
MODULE_LICENSE("GPL");
|
||||||
@@ -2078,6 +2078,8 @@ static int sysc_reset(struct sysc *ddata)
 		sysc_val = sysc_read_sysconfig(ddata);
 		sysc_val |= sysc_mask;
 		sysc_write(ddata, sysc_offset, sysc_val);
+		/* Flush posted write */
+		sysc_val = sysc_read_sysconfig(ddata);
 	}
 
 	if (ddata->cfg.srst_udelay)
@@ -1517,15 +1517,15 @@ static int amdgpu_cs_wait_all_fences(struct amdgpu_device *adev,
 			continue;
 
 		r = dma_fence_wait_timeout(fence, true, timeout);
+		if (r > 0 && fence->error)
+			r = fence->error;
+
 		dma_fence_put(fence);
 		if (r < 0)
 			return r;
 
 		if (r == 0)
 			break;
-
-		if (fence->error)
-			return fence->error;
 	}
 
 	memset(wait, 0, sizeof(*wait));
@@ -2155,6 +2155,7 @@ struct amdgpu_bo_va *amdgpu_vm_bo_add(struct amdgpu_device *adev,
 	amdgpu_vm_bo_base_init(&bo_va->base, vm, bo);
 
 	bo_va->ref_count = 1;
+	bo_va->last_pt_update = dma_fence_get_stub();
 	INIT_LIST_HEAD(&bo_va->valids);
 	INIT_LIST_HEAD(&bo_va->invalids);
 
@@ -2867,7 +2868,8 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm,
 		vm->update_funcs = &amdgpu_vm_cpu_funcs;
 	else
 		vm->update_funcs = &amdgpu_vm_sdma_funcs;
-	vm->last_update = NULL;
+
+	vm->last_update = dma_fence_get_stub();
 	vm->last_unlocked = dma_fence_get_stub();
 
 	mutex_init(&vm->eviction_lock);
@@ -3042,7 +3044,7 @@ int amdgpu_vm_make_compute(struct amdgpu_device *adev, struct amdgpu_vm *vm,
 		vm->update_funcs = &amdgpu_vm_sdma_funcs;
 	}
 	dma_fence_put(vm->last_update);
-	vm->last_update = NULL;
+	vm->last_update = dma_fence_get_stub();
 	vm->is_compute_context = true;
 
 	if (vm->pasid) {
@@ -1010,21 +1010,21 @@ static const struct panel_desc auo_g104sn02 = {
 	},
 };
 
-static const struct drm_display_mode auo_g121ean01_mode = {
-	.clock = 66700,
-	.hdisplay = 1280,
-	.hsync_start = 1280 + 58,
-	.hsync_end = 1280 + 58 + 8,
-	.htotal = 1280 + 58 + 8 + 70,
-	.vdisplay = 800,
-	.vsync_start = 800 + 6,
-	.vsync_end = 800 + 6 + 4,
-	.vtotal = 800 + 6 + 4 + 10,
+static const struct display_timing auo_g121ean01_timing = {
+	.pixelclock = { 60000000, 74400000, 90000000 },
+	.hactive = { 1280, 1280, 1280 },
+	.hfront_porch = { 20, 50, 100 },
+	.hback_porch = { 20, 50, 100 },
+	.hsync_len = { 30, 100, 200 },
+	.vactive = { 800, 800, 800 },
+	.vfront_porch = { 2, 10, 25 },
+	.vback_porch = { 2, 10, 25 },
+	.vsync_len = { 4, 18, 50 },
 };
 
 static const struct panel_desc auo_g121ean01 = {
-	.modes = &auo_g121ean01_mode,
-	.num_modes = 1,
+	.timings = &auo_g121ean01_timing,
+	.num_timings = 1,
 	.bpc = 8,
 	.size = {
 		.width = 261,
@@ -271,7 +271,8 @@ int radeon_cs_parser_init(struct radeon_cs_parser *p, void *data)
 {
 	struct drm_radeon_cs *cs = data;
 	uint64_t *chunk_array_ptr;
-	unsigned size, i;
+	u64 size;
+	unsigned i;
 	u32 ring = RADEON_CS_RING_GFX;
 	s32 priority = 0;
 
@@ -582,6 +582,7 @@
 #define USB_DEVICE_ID_UGCI_FIGHTING	0x0030
 
 #define USB_VENDOR_ID_HP		0x03f0
+#define USB_PRODUCT_ID_HP_ELITE_PRESENTER_MOUSE_464A	0x464a
 #define USB_PRODUCT_ID_HP_LOGITECH_OEM_USB_OPTICAL_MOUSE_0A4A	0x0a4a
 #define USB_PRODUCT_ID_HP_LOGITECH_OEM_USB_OPTICAL_MOUSE_0B4A	0x0b4a
 #define USB_PRODUCT_ID_HP_PIXART_OEM_USB_OPTICAL_MOUSE	0x134a
@@ -96,6 +96,7 @@ static const struct hid_device_id hid_quirks[] = {
 	{ HID_USB_DEVICE(USB_VENDOR_ID_HOLTEK_ALT, USB_DEVICE_ID_HOLTEK_ALT_KEYBOARD_A096), HID_QUIRK_NO_INIT_REPORTS },
 	{ HID_USB_DEVICE(USB_VENDOR_ID_HOLTEK_ALT, USB_DEVICE_ID_HOLTEK_ALT_KEYBOARD_A293), HID_QUIRK_ALWAYS_POLL },
 	{ HID_USB_DEVICE(USB_VENDOR_ID_HP, USB_PRODUCT_ID_HP_LOGITECH_OEM_USB_OPTICAL_MOUSE_0A4A), HID_QUIRK_ALWAYS_POLL },
+	{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_HP, USB_PRODUCT_ID_HP_ELITE_PRESENTER_MOUSE_464A), HID_QUIRK_MULTI_INPUT },
 	{ HID_USB_DEVICE(USB_VENDOR_ID_HP, USB_PRODUCT_ID_HP_LOGITECH_OEM_USB_OPTICAL_MOUSE_0B4A), HID_QUIRK_ALWAYS_POLL },
 	{ HID_USB_DEVICE(USB_VENDOR_ID_HP, USB_PRODUCT_ID_HP_PIXART_OEM_USB_OPTICAL_MOUSE), HID_QUIRK_ALWAYS_POLL },
 	{ HID_USB_DEVICE(USB_VENDOR_ID_HP, USB_PRODUCT_ID_HP_PIXART_OEM_USB_OPTICAL_MOUSE_094A), HID_QUIRK_ALWAYS_POLL },
@@ -242,13 +242,14 @@ static inline u32 iproc_i2c_rd_reg(struct bcm_iproc_i2c_dev *iproc_i2c,
				   u32 offset)
 {
	u32 val;
+	unsigned long flags;
 
	if (iproc_i2c->idm_base) {
-		spin_lock(&iproc_i2c->idm_lock);
+		spin_lock_irqsave(&iproc_i2c->idm_lock, flags);
		writel(iproc_i2c->ape_addr_mask,
		       iproc_i2c->idm_base + IDM_CTRL_DIRECT_OFFSET);
		val = readl(iproc_i2c->base + offset);
-		spin_unlock(&iproc_i2c->idm_lock);
+		spin_unlock_irqrestore(&iproc_i2c->idm_lock, flags);
	} else {
		val = readl(iproc_i2c->base + offset);
	}
@@ -259,12 +260,14 @@ static inline u32 iproc_i2c_rd_reg(struct bcm_iproc_i2c_dev *iproc_i2c,
 static inline void iproc_i2c_wr_reg(struct bcm_iproc_i2c_dev *iproc_i2c,
				    u32 offset, u32 val)
 {
+	unsigned long flags;
+
	if (iproc_i2c->idm_base) {
-		spin_lock(&iproc_i2c->idm_lock);
+		spin_lock_irqsave(&iproc_i2c->idm_lock, flags);
		writel(iproc_i2c->ape_addr_mask,
		       iproc_i2c->idm_base + IDM_CTRL_DIRECT_OFFSET);
		writel(val, iproc_i2c->base + offset);
-		spin_unlock(&iproc_i2c->idm_lock);
+		spin_unlock_irqrestore(&iproc_i2c->idm_lock, flags);
	} else {
		writel(val, iproc_i2c->base + offset);
	}
@@ -432,8 +432,19 @@ i2c_dw_read(struct dw_i2c_dev *dev)
 
		regmap_read(dev->map, DW_IC_DATA_CMD, &tmp);
		/* Ensure length byte is a valid value */
-		if (flags & I2C_M_RECV_LEN &&
-		    tmp <= I2C_SMBUS_BLOCK_MAX && tmp > 0) {
+		if (flags & I2C_M_RECV_LEN) {
+			/*
+			 * if IC_EMPTYFIFO_HOLD_MASTER_EN is set, which cannot be
+			 * detected from the registers, the controller can be
+			 * disabled if the STOP bit is set. But it is only set
+			 * after receiving block data response length in
+			 * I2C_FUNC_SMBUS_BLOCK_DATA case. That needs to read
+			 * another byte with STOP bit set when the block data
+			 * response length is invalid to complete the transaction.
+			 */
+			if (!tmp || tmp > I2C_SMBUS_BLOCK_MAX)
+				tmp = 1;
+
			len = i2c_dw_recv_len(dev, tmp);
		}
		*buf++ = tmp;
@@ -70,6 +70,7 @@ config IIO_TRIGGERED_EVENT
 
 source "drivers/iio/accel/Kconfig"
 source "drivers/iio/adc/Kconfig"
+source "drivers/iio/addac/Kconfig"
 source "drivers/iio/afe/Kconfig"
 source "drivers/iio/amplifiers/Kconfig"
 source "drivers/iio/chemical/Kconfig"
@@ -15,6 +15,7 @@ obj-$(CONFIG_IIO_TRIGGERED_EVENT) += industrialio-triggered-event.o
 
 obj-y += accel/
 obj-y += adc/
+obj-y += addac/
 obj-y += afe/
 obj-y += amplifiers/
 obj-y += buffer/
@@ -15,7 +15,9 @@
 #include <linux/kernel.h>
 #include <linux/module.h>
 #include <linux/moduleparam.h>
+#include <linux/mutex.h>
 #include <linux/spinlock.h>
+#include <linux/types.h>
 
 #define STX104_OUT_CHAN(chan) { \
	.type = IIO_VOLTAGE, \
@@ -44,14 +46,38 @@ static unsigned int num_stx104;
 module_param_hw_array(base, uint, ioport, &num_stx104, 0);
 MODULE_PARM_DESC(base, "Apex Embedded Systems STX104 base addresses");
 
+/**
+ * struct stx104_reg - device register structure
+ * @ssr_ad: Software Strobe Register and ADC Data
+ * @achan: ADC Channel
+ * @dio: Digital I/O
+ * @dac: DAC Channels
+ * @cir_asr: Clear Interrupts and ADC Status
+ * @acr: ADC Control
+ * @pccr_fsh: Pacer Clock Control and FIFO Status MSB
+ * @acfg: ADC Configuration
+ */
+struct stx104_reg {
+	u16 ssr_ad;
+	u8 achan;
+	u8 dio;
+	u16 dac[2];
+	u8 cir_asr;
+	u8 acr;
+	u8 pccr_fsh;
+	u8 acfg;
+};
+
 /**
  * struct stx104_iio - IIO device private data structure
+ * @lock: synchronization lock to prevent I/O race conditions
  * @chan_out_states: channels' output states
- * @base: base port address of the IIO device
+ * @reg: I/O address offset for the device registers
  */
 struct stx104_iio {
+	struct mutex lock;
	unsigned int chan_out_states[STX104_NUM_OUT_CHAN];
-	unsigned int base;
+	struct stx104_reg __iomem *reg;
 };
 
 /**
@@ -64,7 +90,7 @@ struct stx104_iio {
 struct stx104_gpio {
	struct gpio_chip chip;
	spinlock_t lock;
-	unsigned int base;
+	u8 __iomem *base;
	unsigned int out_state;
 };
 
@@ -72,6 +98,7 @@ static int stx104_read_raw(struct iio_dev *indio_dev,
	struct iio_chan_spec const *chan, int *val, int *val2, long mask)
 {
	struct stx104_iio *const priv = iio_priv(indio_dev);
+	struct stx104_reg __iomem *const reg = priv->reg;
	unsigned int adc_config;
	int adbu;
	int gain;
@@ -79,7 +106,7 @@ static int stx104_read_raw(struct iio_dev *indio_dev,
	switch (mask) {
	case IIO_CHAN_INFO_HARDWAREGAIN:
		/* get gain configuration */
-		adc_config = inb(priv->base + 11);
+		adc_config = ioread8(&reg->acfg);
		gain = adc_config & 0x3;
 
		*val = 1 << gain;
@@ -90,25 +117,31 @@ static int stx104_read_raw(struct iio_dev *indio_dev,
			return IIO_VAL_INT;
		}
 
+		mutex_lock(&priv->lock);
+
		/* select ADC channel */
-		outb(chan->channel | (chan->channel << 4), priv->base + 2);
+		iowrite8(chan->channel | (chan->channel << 4), &reg->achan);
 
-		/* trigger ADC sample capture and wait for completion */
-		outb(0, priv->base);
-		while (inb(priv->base + 8) & BIT(7));
+		/* trigger ADC sample capture by writing to the 8-bit
+		 * Software Strobe Register and wait for completion
+		 */
+		iowrite8(0, &reg->ssr_ad);
+		while (ioread8(&reg->cir_asr) & BIT(7));
 
-		*val = inw(priv->base);
+		*val = ioread16(&reg->ssr_ad);
 
+		mutex_unlock(&priv->lock);
		return IIO_VAL_INT;
	case IIO_CHAN_INFO_OFFSET:
		/* get ADC bipolar/unipolar configuration */
-		adc_config = inb(priv->base + 11);
+		adc_config = ioread8(&reg->acfg);
		adbu = !(adc_config & BIT(2));
 
		*val = -32768 * adbu;
		return IIO_VAL_INT;
	case IIO_CHAN_INFO_SCALE:
		/* get ADC bipolar/unipolar and gain configuration */
-		adc_config = inb(priv->base + 11);
+		adc_config = ioread8(&reg->acfg);
		adbu = !(adc_config & BIT(2));
		gain = adc_config & 0x3;
 
@@ -130,16 +163,16 @@ static int stx104_write_raw(struct iio_dev *indio_dev,
		/* Only four gain states (x1, x2, x4, x8) */
		switch (val) {
		case 1:
-			outb(0, priv->base + 11);
+			iowrite8(0, &priv->reg->acfg);
			break;
		case 2:
-			outb(1, priv->base + 11);
+			iowrite8(1, &priv->reg->acfg);
			break;
		case 4:
-			outb(2, priv->base + 11);
+			iowrite8(2, &priv->reg->acfg);
			break;
		case 8:
-			outb(3, priv->base + 11);
+			iowrite8(3, &priv->reg->acfg);
			break;
		default:
			return -EINVAL;
@@ -152,9 +185,12 @@ static int stx104_write_raw(struct iio_dev *indio_dev,
		if ((unsigned int)val > 65535)
			return -EINVAL;
 
-		priv->chan_out_states[chan->channel] = val;
-		outw(val, priv->base + 4 + 2 * chan->channel);
+		mutex_lock(&priv->lock);
 
+		priv->chan_out_states[chan->channel] = val;
+		iowrite16(val, &priv->reg->dac[chan->channel]);
+
+		mutex_unlock(&priv->lock);
		return 0;
	}
	return -EINVAL;
@@ -222,7 +258,7 @@ static int stx104_gpio_get(struct gpio_chip *chip, unsigned int offset)
	if (offset >= 4)
		return -EINVAL;
 
-	return !!(inb(stx104gpio->base) & BIT(offset));
+	return !!(ioread8(stx104gpio->base) & BIT(offset));
 }
 
 static int stx104_gpio_get_multiple(struct gpio_chip *chip, unsigned long *mask,
@@ -230,7 +266,7 @@ static int stx104_gpio_get_multiple(struct gpio_chip *chip, unsigned long *mask,
 {
	struct stx104_gpio *const stx104gpio = gpiochip_get_data(chip);
 
-	*bits = inb(stx104gpio->base);
+	*bits = ioread8(stx104gpio->base);
 
	return 0;
 }
@@ -252,7 +288,7 @@ static void stx104_gpio_set(struct gpio_chip *chip, unsigned int offset,
	else
		stx104gpio->out_state &= ~mask;
 
-	outb(stx104gpio->out_state, stx104gpio->base);
+	iowrite8(stx104gpio->out_state, stx104gpio->base);
 
	spin_unlock_irqrestore(&stx104gpio->lock, flags);
 }
@@ -279,7 +315,7 @@ static void stx104_gpio_set_multiple(struct gpio_chip *chip,
 
	stx104gpio->out_state &= ~*mask;
	stx104gpio->out_state |= *mask & *bits;
-	outb(stx104gpio->out_state, stx104gpio->base);
+	iowrite8(stx104gpio->out_state, stx104gpio->base);
 
	spin_unlock_irqrestore(&stx104gpio->lock, flags);
 }
@@ -306,11 +342,16 @@ static int stx104_probe(struct device *dev, unsigned int id)
		return -EBUSY;
	}
 
+	priv = iio_priv(indio_dev);
+	priv->reg = devm_ioport_map(dev, base[id], STX104_EXTENT);
+	if (!priv->reg)
+		return -ENOMEM;
+
	indio_dev->info = &stx104_info;
	indio_dev->modes = INDIO_DIRECT_MODE;
 
	/* determine if differential inputs */
-	if (inb(base[id] + 8) & BIT(5)) {
+	if (ioread8(&priv->reg->cir_asr) & BIT(5)) {
		indio_dev->num_channels = ARRAY_SIZE(stx104_channels_diff);
		indio_dev->channels = stx104_channels_diff;
	} else {
@@ -320,18 +361,17 @@ static int stx104_probe(struct device *dev, unsigned int id)
 
	indio_dev->name = dev_name(dev);
 
-	priv = iio_priv(indio_dev);
-	priv->base = base[id];
+	mutex_init(&priv->lock);
 
	/* configure device for software trigger operation */
-	outb(0, base[id] + 9);
+	iowrite8(0, &priv->reg->acr);
 
	/* initialize gain setting to x1 */
-	outb(0, base[id] + 11);
+	iowrite8(0, &priv->reg->acfg);
 
	/* initialize DAC output to 0V */
-	outw(0, base[id] + 4);
-	outw(0, base[id] + 6);
+	iowrite16(0, &priv->reg->dac[0]);
+	iowrite16(0, &priv->reg->dac[1]);
 
	stx104gpio->chip.label = dev_name(dev);
	stx104gpio->chip.parent = dev;
@@ -346,7 +386,7 @@ static int stx104_probe(struct device *dev, unsigned int id)
	stx104gpio->chip.get_multiple = stx104_gpio_get_multiple;
	stx104gpio->chip.set = stx104_gpio_set;
	stx104gpio->chip.set_multiple = stx104_gpio_set_multiple;
-	stx104gpio->base = base[id] + 3;
+	stx104gpio->base = &priv->reg->dio;
	stx104gpio->out_state = 0x0;
 
	spin_lock_init(&stx104gpio->lock);
@@ -0,0 +1,8 @@
+#
+# ADC DAC drivers
+#
+# When adding new entries keep the list in alphabetical order
+
+menu "Analog to digital and digital to analog converters"
+
+endmenu
@@ -0,0 +1,6 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Makefile for industrial I/O ADDAC drivers
+#
+
+# When adding new entries keep the list in alphabetical order
|
|||||||
MLX5_SET(destroy_qp_in, in, opcode, MLX5_CMD_OP_DESTROY_QP);
|
MLX5_SET(destroy_qp_in, in, opcode, MLX5_CMD_OP_DESTROY_QP);
|
||||||
MLX5_SET(destroy_qp_in, in, qpn, qp->qpn);
|
MLX5_SET(destroy_qp_in, in, qpn, qp->qpn);
|
||||||
MLX5_SET(destroy_qp_in, in, uid, qp->uid);
|
MLX5_SET(destroy_qp_in, in, uid, qp->uid);
|
||||||
mlx5_cmd_exec_in(dev->mdev, destroy_qp, in);
|
return mlx5_cmd_exec_in(dev->mdev, destroy_qp, in);
|
||||||
return 0;
|
|
||||||
}
|
}
|
||||||
|
|
||||||
int mlx5_core_set_delay_drop(struct mlx5_ib_dev *dev,
|
int mlx5_core_set_delay_drop(struct mlx5_ib_dev *dev,
|
||||||
@@ -542,14 +541,14 @@ int mlx5_core_xrcd_dealloc(struct mlx5_ib_dev *dev, u32 xrcdn)
|
|||||||
return mlx5_cmd_exec_in(dev->mdev, dealloc_xrcd, in);
|
return mlx5_cmd_exec_in(dev->mdev, dealloc_xrcd, in);
|
||||||
}
|
}
|
||||||
|
|
||||||
static void destroy_rq_tracked(struct mlx5_ib_dev *dev, u32 rqn, u16 uid)
|
static int destroy_rq_tracked(struct mlx5_ib_dev *dev, u32 rqn, u16 uid)
|
||||||
{
|
{
|
||||||
u32 in[MLX5_ST_SZ_DW(destroy_rq_in)] = {};
|
u32 in[MLX5_ST_SZ_DW(destroy_rq_in)] = {};
|
||||||
|
|
||||||
MLX5_SET(destroy_rq_in, in, opcode, MLX5_CMD_OP_DESTROY_RQ);
|
MLX5_SET(destroy_rq_in, in, opcode, MLX5_CMD_OP_DESTROY_RQ);
|
||||||
MLX5_SET(destroy_rq_in, in, rqn, rqn);
|
MLX5_SET(destroy_rq_in, in, rqn, rqn);
|
||||||
MLX5_SET(destroy_rq_in, in, uid, uid);
|
MLX5_SET(destroy_rq_in, in, uid, uid);
|
||||||
mlx5_cmd_exec_in(dev->mdev, destroy_rq, in);
|
return mlx5_cmd_exec_in(dev->mdev, destroy_rq, in);
|
||||||
}
|
}
|
||||||
|
|
||||||
int mlx5_core_create_rq_tracked(struct mlx5_ib_dev *dev, u32 *in, int inlen,
|
int mlx5_core_create_rq_tracked(struct mlx5_ib_dev *dev, u32 *in, int inlen,
|
||||||
@@ -580,8 +579,7 @@ int mlx5_core_destroy_rq_tracked(struct mlx5_ib_dev *dev,
|
|||||||
struct mlx5_core_qp *rq)
|
struct mlx5_core_qp *rq)
|
||||||
{
|
{
|
||||||
destroy_resource_common(dev, rq);
|
destroy_resource_common(dev, rq);
|
||||||
destroy_rq_tracked(dev, rq->qpn, rq->uid);
|
return destroy_rq_tracked(dev, rq->qpn, rq->uid);
|
||||||
return 0;
|
|
||||||
}
|
}
|
||||||
|
|
||||||
static void destroy_sq_tracked(struct mlx5_ib_dev *dev, u32 sqn, u16 uid)
|
static void destroy_sq_tracked(struct mlx5_ib_dev *dev, u32 sqn, u16 uid)
|
||||||
|
|||||||
@@ -48,7 +48,7 @@ void __iomem *mips_gic_base;
 
 static DEFINE_PER_CPU_READ_MOSTLY(unsigned long[GIC_MAX_LONGS], pcpu_masks);
 
-static DEFINE_SPINLOCK(gic_lock);
+static DEFINE_RAW_SPINLOCK(gic_lock);
 static struct irq_domain *gic_irq_domain;
 static int gic_shared_intrs;
 static unsigned int gic_cpu_pin;
@@ -209,7 +209,7 @@ static int gic_set_type(struct irq_data *d, unsigned int type)
 
 	irq = GIC_HWIRQ_TO_SHARED(d->hwirq);
 
-	spin_lock_irqsave(&gic_lock, flags);
+	raw_spin_lock_irqsave(&gic_lock, flags);
 	switch (type & IRQ_TYPE_SENSE_MASK) {
 	case IRQ_TYPE_EDGE_FALLING:
 		pol = GIC_POL_FALLING_EDGE;
@@ -249,7 +249,7 @@ static int gic_set_type(struct irq_data *d, unsigned int type)
 	else
 		irq_set_chip_handler_name_locked(d, &gic_level_irq_controller,
						 handle_level_irq, NULL);
-	spin_unlock_irqrestore(&gic_lock, flags);
+	raw_spin_unlock_irqrestore(&gic_lock, flags);
 
 	return 0;
 }
@@ -267,7 +267,7 @@ static int gic_set_affinity(struct irq_data *d, const struct cpumask *cpumask,
 		return -EINVAL;
 
 	/* Assumption : cpumask refers to a single CPU */
-	spin_lock_irqsave(&gic_lock, flags);
+	raw_spin_lock_irqsave(&gic_lock, flags);
 
 	/* Re-route this IRQ */
 	write_gic_map_vp(irq, BIT(mips_cm_vp_id(cpu)));
@@ -278,7 +278,7 @@ static int gic_set_affinity(struct irq_data *d, const struct cpumask *cpumask,
 	set_bit(irq, per_cpu_ptr(pcpu_masks, cpu));
 
 	irq_data_update_effective_affinity(d, cpumask_of(cpu));
-	spin_unlock_irqrestore(&gic_lock, flags);
+	raw_spin_unlock_irqrestore(&gic_lock, flags);
 
 	return IRQ_SET_MASK_OK;
 }
@@ -356,12 +356,12 @@ static void gic_mask_local_irq_all_vpes(struct irq_data *d)
 	cd = irq_data_get_irq_chip_data(d);
 	cd->mask = false;
 
-	spin_lock_irqsave(&gic_lock, flags);
+	raw_spin_lock_irqsave(&gic_lock, flags);
 	for_each_online_cpu(cpu) {
 		write_gic_vl_other(mips_cm_vp_id(cpu));
 		write_gic_vo_rmask(BIT(intr));
 	}
-	spin_unlock_irqrestore(&gic_lock, flags);
+	raw_spin_unlock_irqrestore(&gic_lock, flags);
 }
 
 static void gic_unmask_local_irq_all_vpes(struct irq_data *d)
@@ -374,32 +374,43 @@ static void gic_unmask_local_irq_all_vpes(struct irq_data *d)
 	cd = irq_data_get_irq_chip_data(d);
 	cd->mask = true;
 
-	spin_lock_irqsave(&gic_lock, flags);
+	raw_spin_lock_irqsave(&gic_lock, flags);
 	for_each_online_cpu(cpu) {
 		write_gic_vl_other(mips_cm_vp_id(cpu));
 		write_gic_vo_smask(BIT(intr));
 	}
-	spin_unlock_irqrestore(&gic_lock, flags);
+	raw_spin_unlock_irqrestore(&gic_lock, flags);
 }
 
-static void gic_all_vpes_irq_cpu_online(struct irq_data *d)
+static void gic_all_vpes_irq_cpu_online(void)
 {
-	struct gic_all_vpes_chip_data *cd;
-	unsigned int intr;
+	static const unsigned int local_intrs[] = {
+		GIC_LOCAL_INT_TIMER,
+		GIC_LOCAL_INT_PERFCTR,
+		GIC_LOCAL_INT_FDC,
+	};
+	unsigned long flags;
+	int i;
 
-	intr = GIC_HWIRQ_TO_LOCAL(d->hwirq);
-	cd = irq_data_get_irq_chip_data(d);
+	raw_spin_lock_irqsave(&gic_lock, flags);
 
-	write_gic_vl_map(mips_gic_vx_map_reg(intr), cd->map);
-	if (cd->mask)
-		write_gic_vl_smask(BIT(intr));
+	for (i = 0; i < ARRAY_SIZE(local_intrs); i++) {
+		unsigned int intr = local_intrs[i];
+		struct gic_all_vpes_chip_data *cd;
+
+		cd = &gic_all_vpes_chip_data[intr];
+		write_gic_vl_map(mips_gic_vx_map_reg(intr), cd->map);
+		if (cd->mask)
+			write_gic_vl_smask(BIT(intr));
+	}
+
+	raw_spin_unlock_irqrestore(&gic_lock, flags);
 }
 
 static struct irq_chip gic_all_vpes_local_irq_controller = {
 	.name			= "MIPS GIC Local",
 	.irq_mask		= gic_mask_local_irq_all_vpes,
 	.irq_unmask		= gic_unmask_local_irq_all_vpes,
-	.irq_cpu_online		= gic_all_vpes_irq_cpu_online,
 };
 
 static void __gic_irq_dispatch(void)
@@ -423,11 +434,11 @@ static int gic_shared_irq_domain_map(struct irq_domain *d, unsigned int virq,
 
 	data = irq_get_irq_data(virq);
 
-	spin_lock_irqsave(&gic_lock, flags);
+	raw_spin_lock_irqsave(&gic_lock, flags);
 	write_gic_map_pin(intr, GIC_MAP_PIN_MAP_TO_PIN | gic_cpu_pin);
 	write_gic_map_vp(intr, BIT(mips_cm_vp_id(cpu)));
 	irq_data_update_effective_affinity(data, cpumask_of(cpu));
-	spin_unlock_irqrestore(&gic_lock, flags);
+	raw_spin_unlock_irqrestore(&gic_lock, flags);
 
 	return 0;
 }
@@ -480,6 +491,10 @@ static int gic_irq_domain_map(struct irq_domain *d, unsigned int virq,
 	intr = GIC_HWIRQ_TO_LOCAL(hwirq);
 	map = GIC_MAP_PIN_MAP_TO_PIN | gic_cpu_pin;
 
+	/*
+	 * If adding support for more per-cpu interrupts, keep the the
+	 * array in gic_all_vpes_irq_cpu_online() in sync.
+	 */
 	switch (intr) {
 	case GIC_LOCAL_INT_TIMER:
 		/* CONFIG_MIPS_CMP workaround (see __gic_init) */
@@ -518,12 +533,12 @@ static int gic_irq_domain_map(struct irq_domain *d, unsigned int virq,
 	if (!gic_local_irq_is_routable(intr))
 		return -EPERM;
 
-	spin_lock_irqsave(&gic_lock, flags);
+	raw_spin_lock_irqsave(&gic_lock, flags);
 	for_each_online_cpu(cpu) {
 		write_gic_vl_other(mips_cm_vp_id(cpu));
 		write_gic_vo_map(mips_gic_vx_map_reg(intr), map);
 	}
-	spin_unlock_irqrestore(&gic_lock, flags);
+	raw_spin_unlock_irqrestore(&gic_lock, flags);
 
 	return 0;
 }
@@ -710,8 +725,8 @@ static int gic_cpu_startup(unsigned int cpu)
 	/* Clear all local IRQ masks (ie. disable all local interrupts) */
 	write_gic_vl_rmask(~0);
 
-	/* Invoke irq_cpu_online callbacks to enable desired interrupts */
-	irq_cpu_online();
+	/* Enable desired interrupts */
+	gic_all_vpes_irq_cpu_online();
 
 	return 0;
 }
@@ -539,15 +539,17 @@ static int load_requested_vpu(struct mtk_vpu *vpu,
 int vpu_load_firmware(struct platform_device *pdev)
 {
 	struct mtk_vpu *vpu;
-	struct device *dev = &pdev->dev;
+	struct device *dev;
 	struct vpu_run *run;
 	int ret;
 
 	if (!pdev) {
-		dev_err(dev, "VPU platform device is invalid\n");
+		pr_err("VPU platform device is invalid\n");
 		return -EINVAL;
 	}
 
+	dev = &pdev->dev;
+
 	vpu = platform_get_drvdata(pdev);
 	run = &vpu->run;
 
@@ -1991,14 +1991,14 @@ static void mmc_blk_mq_poll_completion(struct mmc_queue *mq,
 		mmc_blk_urgent_bkops(mq, mqrq);
 }
 
-static void mmc_blk_mq_dec_in_flight(struct mmc_queue *mq, struct request *req)
+static void mmc_blk_mq_dec_in_flight(struct mmc_queue *mq, enum mmc_issue_type issue_type)
 {
 	unsigned long flags;
 	bool put_card;
 
 	spin_lock_irqsave(&mq->lock, flags);
 
-	mq->in_flight[mmc_issue_type(mq, req)] -= 1;
+	mq->in_flight[issue_type] -= 1;
 
 	put_card = (mmc_tot_in_flight(mq) == 0);
 
@@ -2010,6 +2010,7 @@ static void mmc_blk_mq_dec_in_flight(struct mmc_queue *mq, struct request *req)
 
 static void mmc_blk_mq_post_req(struct mmc_queue *mq, struct request *req)
 {
+	enum mmc_issue_type issue_type = mmc_issue_type(mq, req);
 	struct mmc_queue_req *mqrq = req_to_mmc_queue_req(req);
 	struct mmc_request *mrq = &mqrq->brq.mrq;
 	struct mmc_host *host = mq->card->host;
@@ -2025,7 +2026,7 @@ static void mmc_blk_mq_post_req(struct mmc_queue *mq, struct request *req)
 	else if (likely(!blk_should_fake_timeout(req->q)))
 		blk_mq_complete_request(req);
 
-	mmc_blk_mq_dec_in_flight(mq, req);
+	mmc_blk_mq_dec_in_flight(mq, issue_type);
 }
 
 void mmc_blk_mq_recovery(struct mmc_queue *mq)
@@ -514,6 +514,32 @@ struct mmc_host *mmc_alloc_host(int extra, struct device *dev)
 
 EXPORT_SYMBOL(mmc_alloc_host);
 
+static void devm_mmc_host_release(struct device *dev, void *res)
+{
+	mmc_free_host(*(struct mmc_host **)res);
+}
+
+struct mmc_host *devm_mmc_alloc_host(struct device *dev, int extra)
+{
+	struct mmc_host **dr, *host;
+
+	dr = devres_alloc(devm_mmc_host_release, sizeof(*dr), GFP_KERNEL);
+	if (!dr)
+		return ERR_PTR(-ENOMEM);
+
+	host = mmc_alloc_host(extra, dev);
+	if (IS_ERR(host)) {
+		devres_free(dr);
+		return host;
+	}
+
+	*dr = host;
+	devres_add(dev, dr);
+
+	return host;
+}
+EXPORT_SYMBOL(devm_mmc_alloc_host);
+
 static int mmc_validate_host_caps(struct mmc_host *host)
 {
 	if (host->caps & MMC_CAP_SDIO_IRQ && !host->ops->enable_sdio_irq) {
@@ -1413,8 +1413,8 @@ static int bcm2835_probe(struct platform_device *pdev)
 	host->max_clk = clk_get_rate(clk);
 
 	host->irq = platform_get_irq(pdev, 0);
-	if (host->irq <= 0) {
-		ret = -EINVAL;
+	if (host->irq < 0) {
+		ret = host->irq;
 		goto err;
 	}
 
@@ -1122,7 +1122,7 @@ static int meson_mmc_probe(struct platform_device *pdev)
 	struct mmc_host *mmc;
 	int ret;
 
-	mmc = mmc_alloc_host(sizeof(struct meson_host), &pdev->dev);
+	mmc = devm_mmc_alloc_host(&pdev->dev, sizeof(struct meson_host));
 	if (!mmc)
 		return -ENOMEM;
 	host = mmc_priv(mmc);
@@ -1138,46 +1138,33 @@ static int meson_mmc_probe(struct platform_device *pdev)
 	host->vqmmc_enabled = false;
 	ret = mmc_regulator_get_supply(mmc);
 	if (ret)
-		goto free_host;
+		return ret;
 
 	ret = mmc_of_parse(mmc);
-	if (ret) {
-		if (ret != -EPROBE_DEFER)
-			dev_warn(&pdev->dev, "error parsing DT: %d\n", ret);
-		goto free_host;
-	}
+	if (ret)
+		return dev_err_probe(&pdev->dev, ret, "error parsing DT\n");
 
 	host->data = (struct meson_mmc_data *)
 		of_device_get_match_data(&pdev->dev);
-	if (!host->data) {
-		ret = -EINVAL;
-		goto free_host;
-	}
+	if (!host->data)
+		return -EINVAL;
 
 	ret = device_reset_optional(&pdev->dev);
-	if (ret) {
-		dev_err_probe(&pdev->dev, ret, "device reset failed\n");
-		goto free_host;
-	}
+	if (ret)
+		return dev_err_probe(&pdev->dev, ret, "device reset failed\n");
 
 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
 	host->regs = devm_ioremap_resource(&pdev->dev, res);
-	if (IS_ERR(host->regs)) {
-		ret = PTR_ERR(host->regs);
-		goto free_host;
-	}
+	if (IS_ERR(host->regs))
+		return PTR_ERR(host->regs);
 
 	host->irq = platform_get_irq(pdev, 0);
-	if (host->irq <= 0) {
-		ret = -EINVAL;
-		goto free_host;
-	}
+	if (host->irq < 0)
+		return host->irq;
 
 	host->pinctrl = devm_pinctrl_get(&pdev->dev);
-	if (IS_ERR(host->pinctrl)) {
-		ret = PTR_ERR(host->pinctrl);
-		goto free_host;
-	}
+	if (IS_ERR(host->pinctrl))
+		return PTR_ERR(host->pinctrl);
 
 	host->pins_clk_gate = pinctrl_lookup_state(host->pinctrl,
						   "clk-gate");
@@ -1188,14 +1175,12 @@ static int meson_mmc_probe(struct platform_device *pdev)
 	}
 
 	host->core_clk = devm_clk_get(&pdev->dev, "core");
-	if (IS_ERR(host->core_clk)) {
-		ret = PTR_ERR(host->core_clk);
-		goto free_host;
-	}
+	if (IS_ERR(host->core_clk))
+		return PTR_ERR(host->core_clk);
 
 	ret = clk_prepare_enable(host->core_clk);
 	if (ret)
-		goto free_host;
+		return ret;
 
 	ret = meson_mmc_clk_init(host);
 	if (ret)
@@ -1290,8 +1275,6 @@ err_init_clk:
 	clk_disable_unprepare(host->mmc_clk);
 err_core_clk:
 	clk_disable_unprepare(host->core_clk);
-free_host:
-	mmc_free_host(mmc);
 	return ret;
 }
 
@@ -1315,7 +1298,6 @@ static int meson_mmc_remove(struct platform_device *pdev)
 	clk_disable_unprepare(host->mmc_clk);
 	clk_disable_unprepare(host->core_clk);
 
-	mmc_free_host(host->mmc);
 	return 0;
 }
 
@@ -26,9 +26,16 @@ struct f_sdhost_priv {
 	bool enable_cmd_dat_delay;
 };
 
+static void *sdhci_f_sdhost_priv(struct sdhci_host *host)
+{
+	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
+
+	return sdhci_pltfm_priv(pltfm_host);
+}
+
 static void sdhci_f_sdh30_soft_voltage_switch(struct sdhci_host *host)
 {
-	struct f_sdhost_priv *priv = sdhci_priv(host);
+	struct f_sdhost_priv *priv = sdhci_f_sdhost_priv(host);
 	u32 ctrl = 0;
 
 	usleep_range(2500, 3000);
@@ -61,7 +68,7 @@ static unsigned int sdhci_f_sdh30_get_min_clock(struct sdhci_host *host)
 
 static void sdhci_f_sdh30_reset(struct sdhci_host *host, u8 mask)
 {
-	struct f_sdhost_priv *priv = sdhci_priv(host);
+	struct f_sdhost_priv *priv = sdhci_f_sdhost_priv(host);
 	u32 ctl;
 
 	if (sdhci_readw(host, SDHCI_CLOCK_CONTROL) == 0)
@@ -85,30 +92,32 @@ static const struct sdhci_ops sdhci_f_sdh30_ops = {
 	.set_uhs_signaling = sdhci_set_uhs_signaling,
 };
 
+static const struct sdhci_pltfm_data sdhci_f_sdh30_pltfm_data = {
+	.ops = &sdhci_f_sdh30_ops,
+	.quirks = SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC
+		| SDHCI_QUIRK_INVERTED_WRITE_PROTECT,
+	.quirks2 = SDHCI_QUIRK2_SUPPORT_SINGLE
+		| SDHCI_QUIRK2_TUNING_WORK_AROUND,
+};
+
 static int sdhci_f_sdh30_probe(struct platform_device *pdev)
 {
 	struct sdhci_host *host;
 	struct device *dev = &pdev->dev;
-	int irq, ctrl = 0, ret = 0;
+	int ctrl = 0, ret = 0;
 	struct f_sdhost_priv *priv;
+	struct sdhci_pltfm_host *pltfm_host;
 	u32 reg = 0;
 
-	irq = platform_get_irq(pdev, 0);
-	if (irq < 0)
-		return irq;
-
-	host = sdhci_alloc_host(dev, sizeof(struct f_sdhost_priv));
+	host = sdhci_pltfm_init(pdev, &sdhci_f_sdh30_pltfm_data,
				sizeof(struct f_sdhost_priv));
 	if (IS_ERR(host))
 		return PTR_ERR(host);
 
-	priv = sdhci_priv(host);
+	pltfm_host = sdhci_priv(host);
+	priv = sdhci_pltfm_priv(pltfm_host);
 	priv->dev = dev;
 
-	host->quirks = SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC |
-		       SDHCI_QUIRK_INVERTED_WRITE_PROTECT;
-	host->quirks2 = SDHCI_QUIRK2_SUPPORT_SINGLE |
-			SDHCI_QUIRK2_TUNING_WORK_AROUND;
-
 	priv->enable_cmd_dat_delay = device_property_read_bool(dev,
 						"fujitsu,cmd-dat-delay-select");
 
@@ -116,18 +125,6 @@ static int sdhci_f_sdh30_probe(struct platform_device *pdev)
 	if (ret)
 		goto err;
 
-	platform_set_drvdata(pdev, host);
-
-	host->hw_name = "f_sdh30";
-	host->ops = &sdhci_f_sdh30_ops;
-	host->irq = irq;
-
-	host->ioaddr = devm_platform_ioremap_resource(pdev, 0);
-	if (IS_ERR(host->ioaddr)) {
-		ret = PTR_ERR(host->ioaddr);
-		goto err;
-	}
-
 	if (dev_of_node(dev)) {
 		sdhci_get_of_property(pdev);
 
@@ -182,23 +179,22 @@ err_add_host:
 err_clk:
 	clk_disable_unprepare(priv->clk_iface);
 err:
-	sdhci_free_host(host);
+	sdhci_pltfm_free(pdev);
 
 	return ret;
 }
 
 static int sdhci_f_sdh30_remove(struct platform_device *pdev)
 {
 	struct sdhci_host *host = platform_get_drvdata(pdev);
-	struct f_sdhost_priv *priv = sdhci_priv(host);
+	struct f_sdhost_priv *priv = sdhci_f_sdhost_priv(host);
+	struct clk *clk_iface = priv->clk_iface;
+	struct clk *clk = priv->clk;
 
-	sdhci_remove_host(host, readl(host->ioaddr + SDHCI_INT_STATUS) ==
-			  0xffffffff);
+	sdhci_pltfm_unregister(pdev);
 
-	clk_disable_unprepare(priv->clk_iface);
-	clk_disable_unprepare(priv->clk);
+	clk_disable_unprepare(clk_iface);
+	clk_disable_unprepare(clk);
 
-	sdhci_free_host(host);
-	platform_set_drvdata(pdev, NULL);
-
 	return 0;
 }
 
@@ -1317,8 +1317,8 @@ static int sunxi_mmc_resource_request(struct sunxi_mmc_host *host,
 		return ret;
 
 	host->irq = platform_get_irq(pdev, 0);
-	if (host->irq <= 0) {
-		ret = -EINVAL;
+	if (host->irq < 0) {
+		ret = host->irq;
 		goto error_disable_mmc;
 	}
 
@@ -1710,8 +1710,6 @@ static int wbsd_init(struct device *dev, int base, int irq, int dma,
 
 		wbsd_release_resources(host);
 		wbsd_free_mmc(dev);
-
-		mmc_free_host(mmc);
 		return ret;
 	}
 
@@ -2310,6 +2310,14 @@ static void mv88e6xxx_hardware_reset(struct mv88e6xxx_chip *chip)
 
 	/* If there is a GPIO connected to the reset pin, toggle it */
 	if (gpiod) {
+		/* If the switch has just been reset and not yet completed
+		 * loading EEPROM, the reset may interrupt the I2C transaction
+		 * mid-byte, causing the first EEPROM read after the reset
+		 * from the wrong location resulting in the switch booting
+		 * to wrong mode and inoperable.
+		 */
+		mv88e6xxx_g1_wait_eeprom_done(chip);
+
 		gpiod_set_value_cansleep(gpiod, 1);
 		usleep_range(10000, 20000);
 		gpiod_set_value_cansleep(gpiod, 0);
@@ -210,11 +210,11 @@ read_nvm_exit:
  * @hw: pointer to the HW structure.
  * @module_pointer: module pointer location in words from the NVM beginning
  * @offset: offset in words from module start
- * @words: number of words to write
- * @data: buffer with words to write to the Shadow RAM
+ * @words: number of words to read
+ * @data: buffer with words to read to the Shadow RAM
  * @last_command: tells the AdminQ that this is the last command
  *
- * Writes a 16 bit words buffer to the Shadow RAM using the admin command.
+ * Reads a 16 bit words buffer to the Shadow RAM using the admin command.
  **/
 static i40e_status i40e_read_nvm_aq(struct i40e_hw *hw,
 				    u8 module_pointer, u32 offset,
@@ -234,18 +234,18 @@ static i40e_status i40e_read_nvm_aq(struct i40e_hw *hw,
 	 */
 	if ((offset + words) > hw->nvm.sr_size)
 		i40e_debug(hw, I40E_DEBUG_NVM,
-			   "NVM write error: offset %d beyond Shadow RAM limit %d\n",
+			   "NVM read error: offset %d beyond Shadow RAM limit %d\n",
 			   (offset + words), hw->nvm.sr_size);
 	else if (words > I40E_SR_SECTOR_SIZE_IN_WORDS)
-		/* We can write only up to 4KB (one sector), in one AQ write */
+		/* We can read only up to 4KB (one sector), in one AQ write */
 		i40e_debug(hw, I40E_DEBUG_NVM,
-			   "NVM write fail error: tried to write %d words, limit is %d.\n",
+			   "NVM read fail error: tried to read %d words, limit is %d.\n",
 			   words, I40E_SR_SECTOR_SIZE_IN_WORDS);
 	else if (((offset + (words - 1)) / I40E_SR_SECTOR_SIZE_IN_WORDS)
 		 != (offset / I40E_SR_SECTOR_SIZE_IN_WORDS))
-		/* A single write cannot spread over two sectors */
+		/* A single read cannot spread over two sectors */
 		i40e_debug(hw, I40E_DEBUG_NVM,
-			   "NVM write error: cannot spread over two sectors in a single write offset=%d words=%d\n",
+			   "NVM read error: cannot spread over two sectors in a single read offset=%d words=%d\n",
 			   offset, words);
 	else
 		ret_code = i40e_aq_read_nvm(hw, module_pointer,
@@ -89,7 +89,8 @@ static u64 mlx5_read_internal_timer(struct mlx5_core_dev *dev,
|
|||||||
|
|
||||||
static u64 read_internal_timer(const struct cyclecounter *cc)
|
static u64 read_internal_timer(const struct cyclecounter *cc)
|
||||||
{
|
{
|
||||||
struct mlx5_clock *clock = container_of(cc, struct mlx5_clock, cycles);
|
struct mlx5_timer *timer = container_of(cc, struct mlx5_timer, cycles);
|
||||||
|
struct mlx5_clock *clock = container_of(timer, struct mlx5_clock, timer);
|
||||||
struct mlx5_core_dev *mdev = container_of(clock, struct mlx5_core_dev,
|
struct mlx5_core_dev *mdev = container_of(clock, struct mlx5_core_dev,
|
||||||
clock);
|
clock);
|
||||||
|
|
||||||
@@ -100,6 +101,7 @@ static void mlx5_update_clock_info_page(struct mlx5_core_dev *mdev)
|
|||||||
{
|
{
|
||||||
struct mlx5_ib_clock_info *clock_info = mdev->clock_info;
|
struct mlx5_ib_clock_info *clock_info = mdev->clock_info;
|
||||||
struct mlx5_clock *clock = &mdev->clock;
|
struct mlx5_clock *clock = &mdev->clock;
|
||||||
|
struct mlx5_timer *timer;
|
||||||
u32 sign;
|
u32 sign;
|
||||||
|
|
||||||
if (!clock_info)
|
if (!clock_info)
|
||||||
@@ -109,10 +111,11 @@ static void mlx5_update_clock_info_page(struct mlx5_core_dev *mdev)
|
|||||||
smp_store_mb(clock_info->sign,
|
smp_store_mb(clock_info->sign,
|
||||||
sign | MLX5_IB_CLOCK_INFO_KERNEL_UPDATING);
|
sign | MLX5_IB_CLOCK_INFO_KERNEL_UPDATING);
|
||||||
|
|
||||||
clock_info->cycles = clock->tc.cycle_last;
|
timer = &clock->timer;
|
||||||
clock_info->mult = clock->cycles.mult;
|
clock_info->cycles = timer->tc.cycle_last;
|
||||||
clock_info->nsec = clock->tc.nsec;
|
clock_info->mult = timer->cycles.mult;
|
||||||
clock_info->frac = clock->tc.frac;
|
clock_info->nsec = timer->tc.nsec;
|
||||||
|
clock_info->frac = timer->tc.frac;
|
||||||
|
|
||||||
smp_store_release(&clock_info->sign,
|
smp_store_release(&clock_info->sign,
|
||||||
sign + MLX5_IB_CLOCK_INFO_KERNEL_UPDATING * 2);
|
sign + MLX5_IB_CLOCK_INFO_KERNEL_UPDATING * 2);
|
||||||
@@ -151,28 +154,37 @@ static void mlx5_timestamp_overflow(struct work_struct *work)
|
|||||||
{
|
{
|
||||||
struct delayed_work *dwork = to_delayed_work(work);
|
struct delayed_work *dwork = to_delayed_work(work);
|
||||||
struct mlx5_core_dev *mdev;
|
struct mlx5_core_dev *mdev;
|
||||||
|
struct mlx5_timer *timer;
|
||||||
struct mlx5_clock *clock;
|
struct mlx5_clock *clock;
|
||||||
unsigned long flags;
|
unsigned long flags;
|
||||||
|
|
||||||
clock = container_of(dwork, struct mlx5_clock, overflow_work);
|
timer = container_of(dwork, struct mlx5_timer, overflow_work);
|
||||||
|
clock = container_of(timer, struct mlx5_clock, timer);
|
||||||
mdev = container_of(clock, struct mlx5_core_dev, clock);
|
mdev = container_of(clock, struct mlx5_core_dev, clock);
|
||||||
|
|
||||||
|
if (mdev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR)
|
||||||
|
goto out;
|
||||||
|
|
||||||
write_seqlock_irqsave(&clock->lock, flags);
|
write_seqlock_irqsave(&clock->lock, flags);
|
||||||
timecounter_read(&clock->tc);
|
timecounter_read(&timer->tc);
|
||||||
mlx5_update_clock_info_page(mdev);
|
mlx5_update_clock_info_page(mdev);
|
||||||
write_sequnlock_irqrestore(&clock->lock, flags);
|
write_sequnlock_irqrestore(&clock->lock, flags);
|
||||||
schedule_delayed_work(&clock->overflow_work, clock->overflow_period);
|
|
||||||
|
out:
|
||||||
|
schedule_delayed_work(&timer->overflow_work, timer->overflow_period);
|
||||||
}
|
}
|
||||||
|
|
||||||
static int mlx5_ptp_settime(struct ptp_clock_info *ptp, const struct timespec64 *ts)
|
static int mlx5_ptp_settime(struct ptp_clock_info *ptp, const struct timespec64 *ts)
|
||||||
{
|
{
|
||||||
struct mlx5_clock *clock = container_of(ptp, struct mlx5_clock, ptp_info);
|
struct mlx5_clock *clock = container_of(ptp, struct mlx5_clock, ptp_info);
|
||||||
|
struct mlx5_timer *timer = &clock->timer;
|
||||||
u64 ns = timespec64_to_ns(ts);
|
u64 ns = timespec64_to_ns(ts);
|
||||||
struct mlx5_core_dev *mdev;
|
struct mlx5_core_dev *mdev;
|
||||||
unsigned long flags;
|
unsigned long flags;
|
||||||
|
|
||||||
mdev = container_of(clock, struct mlx5_core_dev, clock);
|
mdev = container_of(clock, struct mlx5_core_dev, clock);
|
||||||
write_seqlock_irqsave(&clock->lock, flags);
|
write_seqlock_irqsave(&clock->lock, flags);
|
||||||
timecounter_init(&clock->tc, &clock->cycles, ns);
|
timecounter_init(&timer->tc, &timer->cycles, ns);
|
||||||
mlx5_update_clock_info_page(mdev);
|
mlx5_update_clock_info_page(mdev);
|
||||||
write_sequnlock_irqrestore(&clock->lock, flags);
|
write_sequnlock_irqrestore(&clock->lock, flags);
|
||||||
|
|
||||||
@@ -183,6 +195,7 @@ static int mlx5_ptp_gettimex(struct ptp_clock_info *ptp, struct timespec64 *ts,
 			    struct ptp_system_timestamp *sts)
 {
 	struct mlx5_clock *clock = container_of(ptp, struct mlx5_clock, ptp_info);
+	struct mlx5_timer *timer = &clock->timer;
 	struct mlx5_core_dev *mdev;
 	unsigned long flags;
 	u64 cycles, ns;
@@ -190,7 +203,7 @@ static int mlx5_ptp_gettimex(struct ptp_clock_info *ptp, struct timespec64 *ts,
 	mdev = container_of(clock, struct mlx5_core_dev, clock);
 	write_seqlock_irqsave(&clock->lock, flags);
 	cycles = mlx5_read_internal_timer(mdev, sts);
-	ns = timecounter_cyc2time(&clock->tc, cycles);
+	ns = timecounter_cyc2time(&timer->tc, cycles);
 	write_sequnlock_irqrestore(&clock->lock, flags);
 
 	*ts = ns_to_timespec64(ns);
@@ -201,12 +214,13 @@ static int mlx5_ptp_gettimex(struct ptp_clock_info *ptp, struct timespec64 *ts,
 static int mlx5_ptp_adjtime(struct ptp_clock_info *ptp, s64 delta)
 {
 	struct mlx5_clock *clock = container_of(ptp, struct mlx5_clock, ptp_info);
+	struct mlx5_timer *timer = &clock->timer;
 	struct mlx5_core_dev *mdev;
 	unsigned long flags;
 
 	mdev = container_of(clock, struct mlx5_core_dev, clock);
 	write_seqlock_irqsave(&clock->lock, flags);
-	timecounter_adjtime(&clock->tc, delta);
+	timecounter_adjtime(&timer->tc, delta);
 	mlx5_update_clock_info_page(mdev);
 	write_sequnlock_irqrestore(&clock->lock, flags);
 
@@ -216,27 +230,27 @@ static int mlx5_ptp_adjtime(struct ptp_clock_info *ptp, s64 delta)
 static int mlx5_ptp_adjfreq(struct ptp_clock_info *ptp, s32 delta)
 {
 	struct mlx5_clock *clock = container_of(ptp, struct mlx5_clock, ptp_info);
+	struct mlx5_timer *timer = &clock->timer;
 	struct mlx5_core_dev *mdev;
 	unsigned long flags;
 	int neg_adj = 0;
 	u32 diff;
 	u64 adj;
 
-
 	if (delta < 0) {
 		neg_adj = 1;
 		delta = -delta;
 	}
 
-	adj = clock->nominal_c_mult;
+	adj = timer->nominal_c_mult;
 	adj *= delta;
 	diff = div_u64(adj, 1000000000ULL);
 
 	mdev = container_of(clock, struct mlx5_core_dev, clock);
 	write_seqlock_irqsave(&clock->lock, flags);
-	timecounter_read(&clock->tc);
-	clock->cycles.mult = neg_adj ? clock->nominal_c_mult - diff :
-				       clock->nominal_c_mult + diff;
+	timecounter_read(&timer->tc);
+	timer->cycles.mult = neg_adj ? timer->nominal_c_mult - diff :
+				       timer->nominal_c_mult + diff;
 	mlx5_update_clock_info_page(mdev);
 	write_sequnlock_irqrestore(&clock->lock, flags);
 
@@ -313,6 +327,7 @@ static int mlx5_perout_configure(struct ptp_clock_info *ptp,
 		container_of(ptp, struct mlx5_clock, ptp_info);
 	struct mlx5_core_dev *mdev =
 		container_of(clock, struct mlx5_core_dev, clock);
+	struct mlx5_timer *timer = &clock->timer;
 	u32 in[MLX5_ST_SZ_DW(mtpps_reg)] = {0};
 	u64 nsec_now, nsec_delta, time_stamp = 0;
 	u64 cycles_now, cycles_delta;
@@ -355,10 +370,10 @@ static int mlx5_perout_configure(struct ptp_clock_info *ptp,
 		ns = timespec64_to_ns(&ts);
 		cycles_now = mlx5_read_internal_timer(mdev, NULL);
 		write_seqlock_irqsave(&clock->lock, flags);
-		nsec_now = timecounter_cyc2time(&clock->tc, cycles_now);
+		nsec_now = timecounter_cyc2time(&timer->tc, cycles_now);
 		nsec_delta = ns - nsec_now;
-		cycles_delta = div64_u64(nsec_delta << clock->cycles.shift,
-					 clock->cycles.mult);
+		cycles_delta = div64_u64(nsec_delta << timer->cycles.shift,
+					 timer->cycles.mult);
 		write_sequnlock_irqrestore(&clock->lock, flags);
 		time_stamp = cycles_now + cycles_delta;
 		field_select = MLX5_MTPPS_FS_PIN_MODE |
@@ -541,6 +556,7 @@ static int mlx5_pps_event(struct notifier_block *nb,
 			  unsigned long type, void *data)
 {
 	struct mlx5_clock *clock = mlx5_nb_cof(nb, struct mlx5_clock, pps_nb);
+	struct mlx5_timer *timer = &clock->timer;
 	struct ptp_clock_event ptp_event;
 	u64 cycles_now, cycles_delta;
 	u64 nsec_now, nsec_delta, ns;
@@ -575,10 +591,10 @@ static int mlx5_pps_event(struct notifier_block *nb,
 		ts.tv_nsec = 0;
 		ns = timespec64_to_ns(&ts);
 		write_seqlock_irqsave(&clock->lock, flags);
-		nsec_now = timecounter_cyc2time(&clock->tc, cycles_now);
+		nsec_now = timecounter_cyc2time(&timer->tc, cycles_now);
 		nsec_delta = ns - nsec_now;
-		cycles_delta = div64_u64(nsec_delta << clock->cycles.shift,
-					 clock->cycles.mult);
+		cycles_delta = div64_u64(nsec_delta << timer->cycles.shift,
+					 timer->cycles.mult);
 		clock->pps_info.start[pin] = cycles_now + cycles_delta;
 		write_sequnlock_irqrestore(&clock->lock, flags);
 		schedule_work(&clock->pps_info.out_work);
@@ -591,29 +607,32 @@ static int mlx5_pps_event(struct notifier_block *nb,
 	return NOTIFY_OK;
 }
 
-void mlx5_init_clock(struct mlx5_core_dev *mdev)
+static void mlx5_timecounter_init(struct mlx5_core_dev *mdev)
 {
 	struct mlx5_clock *clock = &mdev->clock;
-	u64 overflow_cycles;
-	u64 ns;
-	u64 frac = 0;
+	struct mlx5_timer *timer = &clock->timer;
 	u32 dev_freq;
 
 	dev_freq = MLX5_CAP_GEN(mdev, device_frequency_khz);
-	if (!dev_freq) {
-		mlx5_core_warn(mdev, "invalid device_frequency_khz, aborting HW clock init\n");
-		return;
-	}
-	seqlock_init(&clock->lock);
-	clock->cycles.read = read_internal_timer;
-	clock->cycles.shift = MLX5_CYCLES_SHIFT;
-	clock->cycles.mult = clocksource_khz2mult(dev_freq,
-						  clock->cycles.shift);
-	clock->nominal_c_mult = clock->cycles.mult;
-	clock->cycles.mask = CLOCKSOURCE_MASK(41);
+	timer->cycles.read = read_internal_timer;
+	timer->cycles.shift = MLX5_CYCLES_SHIFT;
+	timer->cycles.mult = clocksource_khz2mult(dev_freq,
+						  timer->cycles.shift);
+	timer->nominal_c_mult = timer->cycles.mult;
+	timer->cycles.mask = CLOCKSOURCE_MASK(41);
 
-	timecounter_init(&clock->tc, &clock->cycles,
+	timecounter_init(&timer->tc, &timer->cycles,
 			 ktime_to_ns(ktime_get_real()));
+}
+
+static void mlx5_init_overflow_period(struct mlx5_clock *clock)
+{
+	struct mlx5_core_dev *mdev = container_of(clock, struct mlx5_core_dev, clock);
+	struct mlx5_ib_clock_info *clock_info = mdev->clock_info;
+	struct mlx5_timer *timer = &clock->timer;
+	u64 overflow_cycles;
+	u64 frac = 0;
+	u64 ns;
 
 	/* Calculate period in seconds to call the overflow watchdog - to make
 	 * sure counter is checked at least twice every wrap around.
@@ -622,32 +641,63 @@ void mlx5_init_clock(struct mlx5_core_dev *mdev)
 	 * multiplied by clock multiplier where the result doesn't exceed
 	 * 64bits.
 	 */
-	overflow_cycles = div64_u64(~0ULL >> 1, clock->cycles.mult);
-	overflow_cycles = min(overflow_cycles, div_u64(clock->cycles.mask, 3));
+	overflow_cycles = div64_u64(~0ULL >> 1, timer->cycles.mult);
+	overflow_cycles = min(overflow_cycles, div_u64(timer->cycles.mask, 3));
 
-	ns = cyclecounter_cyc2ns(&clock->cycles, overflow_cycles,
+	ns = cyclecounter_cyc2ns(&timer->cycles, overflow_cycles,
 				 frac, &frac);
 	do_div(ns, NSEC_PER_SEC / HZ);
-	clock->overflow_period = ns;
+	timer->overflow_period = ns;
 
-	mdev->clock_info =
-		(struct mlx5_ib_clock_info *)get_zeroed_page(GFP_KERNEL);
-	if (mdev->clock_info) {
-		mdev->clock_info->nsec = clock->tc.nsec;
-		mdev->clock_info->cycles = clock->tc.cycle_last;
-		mdev->clock_info->mask = clock->cycles.mask;
-		mdev->clock_info->mult = clock->nominal_c_mult;
-		mdev->clock_info->shift = clock->cycles.shift;
-		mdev->clock_info->frac = clock->tc.frac;
-		mdev->clock_info->overflow_period = clock->overflow_period;
+	INIT_DELAYED_WORK(&timer->overflow_work, mlx5_timestamp_overflow);
+	if (timer->overflow_period)
+		schedule_delayed_work(&timer->overflow_work, 0);
+	else
+		mlx5_core_warn(mdev,
+			       "invalid overflow period, overflow_work is not scheduled\n");
+
+	if (clock_info)
+		clock_info->overflow_period = timer->overflow_period;
+}
+
+static void mlx5_init_clock_info(struct mlx5_core_dev *mdev)
+{
+	struct mlx5_clock *clock = &mdev->clock;
+	struct mlx5_ib_clock_info *info;
+	struct mlx5_timer *timer;
+
+	mdev->clock_info = (struct mlx5_ib_clock_info *)get_zeroed_page(GFP_KERNEL);
+	if (!mdev->clock_info) {
+		mlx5_core_warn(mdev, "Failed to allocate IB clock info page\n");
+		return;
 	}
 
+	info = mdev->clock_info;
+	timer = &clock->timer;
+
+	info->nsec = timer->tc.nsec;
+	info->cycles = timer->tc.cycle_last;
+	info->mask = timer->cycles.mask;
+	info->mult = timer->nominal_c_mult;
+	info->shift = timer->cycles.shift;
+	info->frac = timer->tc.frac;
+}
+
+void mlx5_init_clock(struct mlx5_core_dev *mdev)
+{
+	struct mlx5_clock *clock = &mdev->clock;
+
+	if (!MLX5_CAP_GEN(mdev, device_frequency_khz)) {
+		mlx5_core_warn(mdev, "invalid device_frequency_khz, aborting HW clock init\n");
+		return;
+	}
+
+	seqlock_init(&clock->lock);
+
+	mlx5_timecounter_init(mdev);
+	mlx5_init_clock_info(mdev);
+	mlx5_init_overflow_period(clock);
 	INIT_WORK(&clock->pps_info.out_work, mlx5_pps_out);
-	INIT_DELAYED_WORK(&clock->overflow_work, mlx5_timestamp_overflow);
-	if (clock->overflow_period)
-		schedule_delayed_work(&clock->overflow_work, 0);
-	else
-		mlx5_core_warn(mdev, "invalid overflow period, overflow_work is not scheduled\n");
 
 	/* Configure the PHC */
 	clock->ptp_info = mlx5_ptp_clock_info;
@@ -684,7 +734,7 @@ void mlx5_cleanup_clock(struct mlx5_core_dev *mdev)
 	}
 
 	cancel_work_sync(&clock->pps_info.out_work);
-	cancel_delayed_work_sync(&clock->overflow_work);
+	cancel_delayed_work_sync(&clock->timer.overflow_work);
 
 	if (mdev->clock_info) {
 		free_page((unsigned long)mdev->clock_info);
@@ -45,12 +45,13 @@ static inline int mlx5_clock_get_ptp_index(struct mlx5_core_dev *mdev)
 static inline ktime_t mlx5_timecounter_cyc2time(struct mlx5_clock *clock,
 						u64 timestamp)
 {
+	struct mlx5_timer *timer = &clock->timer;
 	unsigned int seq;
 	u64 nsec;
 
 	do {
 		seq = read_seqbegin(&clock->lock);
-		nsec = timecounter_cyc2time(&clock->tc, timestamp);
+		nsec = timecounter_cyc2time(&timer->tc, timestamp);
 	} while (read_seqretry(&clock->lock, seq));
 
 	return ns_to_ktime(nsec);
@@ -159,6 +159,19 @@ static struct macsec_rx_sa *macsec_rxsa_get(struct macsec_rx_sa __rcu *ptr)
 	return sa;
 }
 
+static struct macsec_rx_sa *macsec_active_rxsa_get(struct macsec_rx_sc *rx_sc)
+{
+	struct macsec_rx_sa *sa = NULL;
+	int an;
+
+	for (an = 0; an < MACSEC_NUM_AN; an++) {
+		sa = macsec_rxsa_get(rx_sc->sa[an]);
+		if (sa)
+			break;
+	}
+	return sa;
+}
+
 static void free_rx_sc_rcu(struct rcu_head *head)
 {
 	struct macsec_rx_sc *rx_sc = container_of(head, struct macsec_rx_sc, rcu_head);
@@ -497,18 +510,28 @@ static void macsec_encrypt_finish(struct sk_buff *skb, struct net_device *dev)
 	skb->protocol = eth_hdr(skb)->h_proto;
 }
 
+static unsigned int macsec_msdu_len(struct sk_buff *skb)
+{
+	struct macsec_dev *macsec = macsec_priv(skb->dev);
+	struct macsec_secy *secy = &macsec->secy;
+	bool sci_present = macsec_skb_cb(skb)->has_sci;
+
+	return skb->len - macsec_hdr_len(sci_present) - secy->icv_len;
+}
+
 static void macsec_count_tx(struct sk_buff *skb, struct macsec_tx_sc *tx_sc,
 			    struct macsec_tx_sa *tx_sa)
 {
+	unsigned int msdu_len = macsec_msdu_len(skb);
 	struct pcpu_tx_sc_stats *txsc_stats = this_cpu_ptr(tx_sc->stats);
 
 	u64_stats_update_begin(&txsc_stats->syncp);
 	if (tx_sc->encrypt) {
-		txsc_stats->stats.OutOctetsEncrypted += skb->len;
+		txsc_stats->stats.OutOctetsEncrypted += msdu_len;
 		txsc_stats->stats.OutPktsEncrypted++;
 		this_cpu_inc(tx_sa->stats->OutPktsEncrypted);
 	} else {
-		txsc_stats->stats.OutOctetsProtected += skb->len;
+		txsc_stats->stats.OutOctetsProtected += msdu_len;
 		txsc_stats->stats.OutPktsProtected++;
 		this_cpu_inc(tx_sa->stats->OutPktsProtected);
 	}
@@ -538,9 +561,10 @@ static void macsec_encrypt_done(struct crypto_async_request *base, int err)
 	aead_request_free(macsec_skb_cb(skb)->req);
 
 	rcu_read_lock_bh();
-	macsec_encrypt_finish(skb, dev);
 	macsec_count_tx(skb, &macsec->secy.tx_sc, macsec_skb_cb(skb)->tx_sa);
-	len = skb->len;
+	/* packet is encrypted/protected so tx_bytes must be calculated */
+	len = macsec_msdu_len(skb) + 2 * ETH_ALEN;
+	macsec_encrypt_finish(skb, dev);
 	ret = dev_queue_xmit(skb);
 	count_tx(dev, ret, len);
 	rcu_read_unlock_bh();
@@ -699,6 +723,7 @@ static struct sk_buff *macsec_encrypt(struct sk_buff *skb,
 
 	macsec_skb_cb(skb)->req = req;
 	macsec_skb_cb(skb)->tx_sa = tx_sa;
+	macsec_skb_cb(skb)->has_sci = sci_present;
 	aead_request_set_callback(req, 0, macsec_encrypt_done, skb);
 
 	dev_hold(skb->dev);
@@ -740,15 +765,17 @@ static bool macsec_post_decrypt(struct sk_buff *skb, struct macsec_secy *secy, u
 			u64_stats_update_begin(&rxsc_stats->syncp);
 			rxsc_stats->stats.InPktsLate++;
 			u64_stats_update_end(&rxsc_stats->syncp);
+			DEV_STATS_INC(secy->netdev, rx_dropped);
 			return false;
 		}
 
 		if (secy->validate_frames != MACSEC_VALIDATE_DISABLED) {
+			unsigned int msdu_len = macsec_msdu_len(skb);
 			u64_stats_update_begin(&rxsc_stats->syncp);
 			if (hdr->tci_an & MACSEC_TCI_E)
-				rxsc_stats->stats.InOctetsDecrypted += skb->len;
+				rxsc_stats->stats.InOctetsDecrypted += msdu_len;
 			else
-				rxsc_stats->stats.InOctetsValidated += skb->len;
+				rxsc_stats->stats.InOctetsValidated += msdu_len;
 			u64_stats_update_end(&rxsc_stats->syncp);
 		}
 
@@ -761,6 +788,8 @@ static bool macsec_post_decrypt(struct sk_buff *skb, struct macsec_secy *secy, u
 			u64_stats_update_begin(&rxsc_stats->syncp);
 			rxsc_stats->stats.InPktsNotValid++;
 			u64_stats_update_end(&rxsc_stats->syncp);
+			this_cpu_inc(rx_sa->stats->InPktsNotValid);
+			DEV_STATS_INC(secy->netdev, rx_errors);
 			return false;
 		}
 
@@ -853,9 +882,9 @@ static void macsec_decrypt_done(struct crypto_async_request *base, int err)
 
 	macsec_finalize_skb(skb, macsec->secy.icv_len,
 			    macsec_extra_len(macsec_skb_cb(skb)->has_sci));
+	len = skb->len;
 	macsec_reset_skb(skb, macsec->secy.netdev);
 
-	len = skb->len;
 	if (gro_cells_receive(&macsec->gro_cells, skb) == NET_RX_SUCCESS)
 		count_rx(dev, len);
 
@@ -1046,6 +1075,7 @@ static enum rx_handler_result handle_not_macsec(struct sk_buff *skb)
 			u64_stats_update_begin(&secy_stats->syncp);
 			secy_stats->stats.InPktsNoTag++;
 			u64_stats_update_end(&secy_stats->syncp);
+			DEV_STATS_INC(macsec->secy.netdev, rx_dropped);
 			continue;
 		}
 
@@ -1155,6 +1185,7 @@ static rx_handler_result_t macsec_handle_frame(struct sk_buff **pskb)
 		u64_stats_update_begin(&secy_stats->syncp);
 		secy_stats->stats.InPktsBadTag++;
 		u64_stats_update_end(&secy_stats->syncp);
+		DEV_STATS_INC(secy->netdev, rx_errors);
 		goto drop_nosa;
 	}
 
@@ -1165,11 +1196,15 @@ static rx_handler_result_t macsec_handle_frame(struct sk_buff **pskb)
 		/* If validateFrames is Strict or the C bit in the
 		 * SecTAG is set, discard
 		 */
+		struct macsec_rx_sa *active_rx_sa = macsec_active_rxsa_get(rx_sc);
 		if (hdr->tci_an & MACSEC_TCI_C ||
 		    secy->validate_frames == MACSEC_VALIDATE_STRICT) {
 			u64_stats_update_begin(&rxsc_stats->syncp);
 			rxsc_stats->stats.InPktsNotUsingSA++;
 			u64_stats_update_end(&rxsc_stats->syncp);
+			DEV_STATS_INC(secy->netdev, rx_errors);
+			if (active_rx_sa)
+				this_cpu_inc(active_rx_sa->stats->InPktsNotUsingSA);
 			goto drop_nosa;
 		}
 
@@ -1179,6 +1214,8 @@ static rx_handler_result_t macsec_handle_frame(struct sk_buff **pskb)
 			u64_stats_update_begin(&rxsc_stats->syncp);
 			rxsc_stats->stats.InPktsUnusedSA++;
 			u64_stats_update_end(&rxsc_stats->syncp);
+			if (active_rx_sa)
+				this_cpu_inc(active_rx_sa->stats->InPktsUnusedSA);
 			goto deliver;
 		}
 
@@ -1199,6 +1236,7 @@ static rx_handler_result_t macsec_handle_frame(struct sk_buff **pskb)
 			u64_stats_update_begin(&rxsc_stats->syncp);
 			rxsc_stats->stats.InPktsLate++;
 			u64_stats_update_end(&rxsc_stats->syncp);
+			DEV_STATS_INC(macsec->secy.netdev, rx_dropped);
 			goto drop;
 		}
 	}
@@ -1227,6 +1265,7 @@ static rx_handler_result_t macsec_handle_frame(struct sk_buff **pskb)
 deliver:
 	macsec_finalize_skb(skb, secy->icv_len,
 			    macsec_extra_len(macsec_skb_cb(skb)->has_sci));
+	len = skb->len;
 	macsec_reset_skb(skb, secy->netdev);
 
 	if (rx_sa)
@@ -1234,12 +1273,11 @@ deliver:
 	macsec_rxsc_put(rx_sc);
 
 	skb_orphan(skb);
-	len = skb->len;
 	ret = gro_cells_receive(&macsec->gro_cells, skb);
 	if (ret == NET_RX_SUCCESS)
 		count_rx(dev, len);
 	else
-		macsec->secy.netdev->stats.rx_dropped++;
+		DEV_STATS_INC(macsec->secy.netdev, rx_dropped);
 
 	rcu_read_unlock();
 
@@ -1276,6 +1314,7 @@ nosci:
 			u64_stats_update_begin(&secy_stats->syncp);
 			secy_stats->stats.InPktsNoSCI++;
 			u64_stats_update_end(&secy_stats->syncp);
+			DEV_STATS_INC(macsec->secy.netdev, rx_errors);
 			continue;
 		}
 
@@ -1294,7 +1333,7 @@ nosci:
 			secy_stats->stats.InPktsUnknownSCI++;
 			u64_stats_update_end(&secy_stats->syncp);
 		} else {
-			macsec->secy.netdev->stats.rx_dropped++;
+			DEV_STATS_INC(macsec->secy.netdev, rx_dropped);
 		}
 	}
 
@@ -3403,21 +3442,21 @@ static netdev_tx_t macsec_start_xmit(struct sk_buff *skb,
 
 	if (!secy->operational) {
 		kfree_skb(skb);
-		dev->stats.tx_dropped++;
+		DEV_STATS_INC(dev, tx_dropped);
 		return NETDEV_TX_OK;
 	}
 
+	len = skb->len;
 	skb = macsec_encrypt(skb, dev);
 	if (IS_ERR(skb)) {
 		if (PTR_ERR(skb) != -EINPROGRESS)
-			dev->stats.tx_dropped++;
+			DEV_STATS_INC(dev, tx_dropped);
 		return NETDEV_TX_OK;
 	}
 
 	macsec_count_tx(skb, &macsec->secy.tx_sc, macsec_skb_cb(skb)->tx_sa);
 
 	macsec_encrypt_finish(skb, dev);
-	len = skb->len;
 	ret = dev_queue_xmit(skb);
 	count_tx(dev, ret, len);
 	return ret;
@@ -3646,8 +3685,9 @@ static void macsec_get_stats64(struct net_device *dev,
 
 	dev_fetch_sw_netstats(s, dev->tstats);
 
-	s->rx_dropped = dev->stats.rx_dropped;
-	s->tx_dropped = dev->stats.tx_dropped;
+	s->rx_dropped = atomic_long_read(&dev->stats.__rx_dropped);
+	s->tx_dropped = atomic_long_read(&dev->stats.__tx_dropped);
+	s->rx_errors = atomic_long_read(&dev->stats.__rx_errors);
 }
 
 static int macsec_get_iflink(const struct net_device *dev)
@@ -404,6 +404,17 @@ static int bcm54xx_resume(struct phy_device *phydev)
 	return bcm54xx_config_init(phydev);
 }
 
+static int bcm54810_read_mmd(struct phy_device *phydev, int devnum, u16 regnum)
+{
+	return -EOPNOTSUPP;
+}
+
+static int bcm54810_write_mmd(struct phy_device *phydev, int devnum, u16 regnum,
+			      u16 val)
+{
+	return -EOPNOTSUPP;
+}
+
 static int bcm54811_config_init(struct phy_device *phydev)
 {
 	int err, reg;
@@ -841,6 +852,8 @@ static struct phy_driver broadcom_drivers[] = {
 	.phy_id_mask = 0xfffffff0,
 	.name = "Broadcom BCM54810",
 	/* PHY_GBIT_FEATURES */
+	.read_mmd = bcm54810_read_mmd,
+	.write_mmd = bcm54810_write_mmd,
 	.config_init = bcm54xx_config_init,
 	.config_aneg = bcm5481_config_aneg,
 	.ack_interrupt = bcm_phy_ack_intr,
@@ -2195,7 +2195,9 @@ static void team_setup(struct net_device *dev)
 
 	dev->hw_features = TEAM_VLAN_FEATURES |
 			   NETIF_F_HW_VLAN_CTAG_RX |
-			   NETIF_F_HW_VLAN_CTAG_FILTER;
+			   NETIF_F_HW_VLAN_CTAG_FILTER |
+			   NETIF_F_HW_VLAN_STAG_RX |
+			   NETIF_F_HW_VLAN_STAG_FILTER;
 
 	dev->hw_features |= NETIF_F_GSO_ENCAP_ALL | NETIF_F_GSO_UDP_L4;
 	dev->features |= dev->hw_features;
@@ -3223,8 +3223,6 @@ static int virtnet_probe(struct virtio_device *vdev)
 		}
 	}
 
-	_virtnet_set_queues(vi, vi->curr_queue_pairs);
-
 	/* serialize netdev register + virtio_device_ready() with ndo_open() */
 	rtnl_lock();
 
@@ -3237,6 +3235,8 @@ static int virtnet_probe(struct virtio_device *vdev)
 
 	virtio_device_ready(vdev);
 
+	_virtnet_set_queues(vi, vi->curr_queue_pairs);
+
 	rtnl_unlock();
 
 	err = virtnet_cpu_notif_add(vi);
@@ -239,6 +239,7 @@
 #define EP_STATE_ENABLED 1
 
 static const unsigned int pcie_gen_freq[] = {
+	GEN1_CORE_CLK_FREQ,	/* PCI_EXP_LNKSTA_CLS == 0; undefined */
 	GEN1_CORE_CLK_FREQ,
 	GEN2_CORE_CLK_FREQ,
 	GEN3_CORE_CLK_FREQ,
@@ -470,7 +471,11 @@ static irqreturn_t tegra_pcie_ep_irq_thread(int irq, void *arg)
 
 	speed = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKSTA) &
 		PCI_EXP_LNKSTA_CLS;
-	clk_set_rate(pcie->core_clk, pcie_gen_freq[speed - 1]);
+
+	if (speed >= ARRAY_SIZE(pcie_gen_freq))
+		speed = 0;
+
+	clk_set_rate(pcie->core_clk, pcie_gen_freq[speed]);
 
 	/* If EP doesn't advertise L1SS, just return */
 	val = dw_pcie_readl_dbi(pci, pcie->cfg_link_cap_l1sub);
@@ -973,7 +978,11 @@ static int tegra_pcie_dw_host_init(struct pcie_port *pp)
 
 	speed = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKSTA) &
 		PCI_EXP_LNKSTA_CLS;
-	clk_set_rate(pcie->core_clk, pcie_gen_freq[speed - 1]);
+
+	if (speed >= ARRAY_SIZE(pcie_gen_freq))
+		speed = 0;
+
+	clk_set_rate(pcie->core_clk, pcie_gen_freq[speed]);
 
 	tegra_pcie_enable_interrupts(pp);
 
@@ -1053,6 +1053,8 @@ static void nonstatic_release_resource_db(struct pcmcia_socket *s)
 		q = p->next;
 		kfree(p);
 	}
+
+	kfree(data);
 }
 
 
@@ -2159,12 +2159,13 @@ static void gsm_error(struct gsm_mux *gsm,
 static void gsm_cleanup_mux(struct gsm_mux *gsm, bool disc)
 {
 	int i;
-	struct gsm_dlci *dlci = gsm->dlci[0];
+	struct gsm_dlci *dlci;
 	struct gsm_msg *txq, *ntxq;
 
 	gsm->dead = true;
 	mutex_lock(&gsm->mutex);
 
+	dlci = gsm->dlci[0];
 	if (dlci) {
 		if (disc && dlci->state != DLCI_CLOSED) {
 			gsm_dlci_begin_close(dlci);
@@ -3233,6 +3233,7 @@ void serial8250_init_port(struct uart_8250_port *up)
 	struct uart_port *port = &up->port;
 
 	spin_lock_init(&port->lock);
+	port->pm = NULL;
 	port->ops = &serial8250_pops;
 	port->has_sysrq = IS_ENABLED(CONFIG_SERIAL_8250_CONSOLE);
 
|
|||||||
@@ -1062,8 +1062,8 @@ static void lpuart_copy_rx_to_tty(struct lpuart_port *sport)
 		unsigned long sr = lpuart32_read(&sport->port, UARTSTAT);

 		if (sr & (UARTSTAT_PE | UARTSTAT_FE)) {
-			/* Read DR to clear the error flags */
-			lpuart32_read(&sport->port, UARTDATA);
+			/* Clear the error flags */
+			lpuart32_write(&sport->port, sr, UARTSTAT);

 			if (sr & UARTSTAT_PE)
 				sport->port.icount.parity++;
@@ -2041,7 +2041,7 @@ int cdns3_ep_config(struct cdns3_endpoint *priv_ep, bool enable)
 	u8 mult = 0;
 	int ret;

-	buffering = CDNS3_EP_BUF_SIZE - 1;
+	buffering = priv_dev->ep_buf_size - 1;

 	cdns3_configure_dmult(priv_dev, priv_ep);

@@ -2060,7 +2060,7 @@ int cdns3_ep_config(struct cdns3_endpoint *priv_ep, bool enable)
 			break;
 		default:
 			ep_cfg = EP_CFG_EPTYPE(USB_ENDPOINT_XFER_ISOC);
-			mult = CDNS3_EP_ISO_HS_MULT - 1;
+			mult = priv_dev->ep_iso_burst - 1;
 			buffering = mult + 1;
 		}

@@ -2076,14 +2076,14 @@ int cdns3_ep_config(struct cdns3_endpoint *priv_ep, bool enable)
 		mult = 0;
 		max_packet_size = 1024;
 		if (priv_ep->type == USB_ENDPOINT_XFER_ISOC) {
-			maxburst = CDNS3_EP_ISO_SS_BURST - 1;
+			maxburst = priv_dev->ep_iso_burst - 1;
 			buffering = (mult + 1) *
 				    (maxburst + 1);

 			if (priv_ep->interval > 1)
 				buffering++;
 		} else {
-			maxburst = CDNS3_EP_BUF_SIZE - 1;
+			maxburst = priv_dev->ep_buf_size - 1;
 		}
 		break;
 	default:
@@ -2098,6 +2098,23 @@ int cdns3_ep_config(struct cdns3_endpoint *priv_ep, bool enable)
 	else
 		priv_ep->trb_burst_size = 16;

+	/*
+	 * In versions preceding DEV_VER_V2, for example, iMX8QM, there exist
+	 * bugs in the DMA. These bugs occur when the trb_burst_size exceeds 16
+	 * and the address is not aligned to 128 Bytes (which is a product of
+	 * the 64-bit AXI and AXI maximum burst length of 16 or 0xF+1,
+	 * dma_axi_ctrl0[3:0]). This results in data corruption when it crosses
+	 * the 4K border. The corruption specifically occurs from the position
+	 * (4K - (address & 0x7F)) to 4K.
+	 *
+	 * So force trb_burst_size to 16 on such platforms.
+	 */
+	if (priv_dev->dev_ver < DEV_VER_V2)
+		priv_ep->trb_burst_size = 16;
+
+	mult = min_t(u8, mult, EP_CFG_MULT_MAX);
+	buffering = min_t(u8, buffering, EP_CFG_BUFFERING_MAX);
+	maxburst = min_t(u8, maxburst, EP_CFG_MAXBURST_MAX);

 	/* onchip buffer is only allocated before configuration */
 	if (!priv_dev->hw_configured_flag) {
 		ret = cdns3_ep_onchip_buffer_reserve(priv_dev, buffering + 1,
@@ -2971,6 +2988,40 @@ static int cdns3_gadget_udc_stop(struct usb_gadget *gadget)
 	return 0;
 }

+/**
+ * cdns3_gadget_check_config - ensure cdns3 can support the USB configuration
+ * @gadget: pointer to the USB gadget
+ *
+ * Used to record the maximum number of endpoints being used in a USB composite
+ * device. (across all configurations) This is to be used in the calculation
+ * of the TXFIFO sizes when resizing internal memory for individual endpoints.
+ * It will help ensure that the resizing logic reserves enough space for at
+ * least one max packet.
+ */
+static int cdns3_gadget_check_config(struct usb_gadget *gadget)
+{
+	struct cdns3_device *priv_dev = gadget_to_cdns3_device(gadget);
+	struct usb_ep *ep;
+	int n_in = 0;
+	int total;
+
+	list_for_each_entry(ep, &gadget->ep_list, ep_list) {
+		if (ep->claimed && (ep->address & USB_DIR_IN))
+			n_in++;
+	}
+
+	/* 2KB are reserved for EP0, 1KB for out*/
+	total = 2 + n_in + 1;
+
+	if (total > priv_dev->onchip_buffers)
+		return -ENOMEM;
+
+	priv_dev->ep_buf_size = priv_dev->ep_iso_burst =
+			(priv_dev->onchip_buffers - 2) / (n_in + 1);
+
+	return 0;
+}
+
 static const struct usb_gadget_ops cdns3_gadget_ops = {
 	.get_frame = cdns3_gadget_get_frame,
 	.wakeup = cdns3_gadget_wakeup,
@@ -2979,6 +3030,7 @@ static const struct usb_gadget_ops cdns3_gadget_ops = {
 	.udc_start = cdns3_gadget_udc_start,
 	.udc_stop = cdns3_gadget_udc_stop,
 	.match_ep = cdns3_gadget_match_ep,
+	.check_config = cdns3_gadget_check_config,
 };

 static void cdns3_free_all_eps(struct cdns3_device *priv_dev)
@@ -561,15 +561,18 @@ struct cdns3_usb_regs {
 /* Max burst size (used only in SS mode). */
 #define EP_CFG_MAXBURST_MASK	GENMASK(11, 8)
 #define EP_CFG_MAXBURST(p)	(((p) << 8) & EP_CFG_MAXBURST_MASK)
+#define EP_CFG_MAXBURST_MAX	15
 /* ISO max burst. */
 #define EP_CFG_MULT_MASK	GENMASK(15, 14)
 #define EP_CFG_MULT(p)		(((p) << 14) & EP_CFG_MULT_MASK)
+#define EP_CFG_MULT_MAX		2
 /* ISO max burst. */
 #define EP_CFG_MAXPKTSIZE_MASK	GENMASK(26, 16)
 #define EP_CFG_MAXPKTSIZE(p)	(((p) << 16) & EP_CFG_MAXPKTSIZE_MASK)
 /* Max number of buffered packets. */
 #define EP_CFG_BUFFERING_MASK	GENMASK(31, 27)
 #define EP_CFG_BUFFERING(p)	(((p) << 27) & EP_CFG_BUFFERING_MASK)
+#define EP_CFG_BUFFERING_MAX	15

 /* EP_CMD - bitmasks */
 /* Endpoint reset. */
@@ -1093,9 +1096,6 @@ struct cdns3_trb {
 #define CDNS3_ENDPOINTS_MAX_COUNT	32
 #define CDNS3_EP_ZLP_BUF_SIZE		1024

-#define CDNS3_EP_BUF_SIZE		4	/* KB */
-#define CDNS3_EP_ISO_HS_MULT		3
-#define CDNS3_EP_ISO_SS_BURST		3
 #define CDNS3_MAX_NUM_DESCMISS_BUF	32
 #define CDNS3_DESCMIS_BUF_SIZE		2048	/* Bytes */
 #define CDNS3_WA2_NUM_BUFFERS		128
@@ -1330,6 +1330,9 @@ struct cdns3_device {
 	/*in KB */
 	u16				onchip_buffers;
 	u16				onchip_used_size;
+
+	u16				ep_buf_size;
+	u16				ep_iso_burst;
 };

 void cdns3_set_register_bit(void __iomem *ptr, u32 mask);
@@ -70,6 +70,10 @@ static const struct ci_hdrc_imx_platform_flag imx7ulp_usb_data = {
 		CI_HDRC_PMQOS,
 };

+static const struct ci_hdrc_imx_platform_flag imx8ulp_usb_data = {
+	.flags = CI_HDRC_SUPPORTS_RUNTIME_PM,
+};
+
 static const struct of_device_id ci_hdrc_imx_dt_ids[] = {
 	{ .compatible = "fsl,imx23-usb", .data = &imx23_usb_data},
 	{ .compatible = "fsl,imx28-usb", .data = &imx28_usb_data},
@@ -80,6 +84,7 @@ static const struct of_device_id ci_hdrc_imx_dt_ids[] = {
 	{ .compatible = "fsl,imx6ul-usb", .data = &imx6ul_usb_data},
 	{ .compatible = "fsl,imx7d-usb", .data = &imx7d_usb_data},
 	{ .compatible = "fsl,imx7ulp-usb", .data = &imx7ulp_usb_data},
+	{ .compatible = "fsl,imx8ulp-usb", .data = &imx8ulp_usb_data},
 	{ /* sentinel */ }
 };
 MODULE_DEVICE_TABLE(of, ci_hdrc_imx_dt_ids);
@@ -135,7 +135,7 @@
 #define TXVREFTUNE0_MASK		(0xf << 20)

 #define MX6_USB_OTG_WAKEUP_BITS (MX6_BM_WAKEUP_ENABLE | MX6_BM_VBUS_WAKEUP | \
-				 MX6_BM_ID_WAKEUP)
+				 MX6_BM_ID_WAKEUP | MX6SX_BM_DPDM_WAKEUP_EN)

 struct usbmisc_ops {
 	/* It's called once when probe a usb device */
@@ -306,7 +306,16 @@ static void dwc3_qcom_interconnect_exit(struct dwc3_qcom *qcom)
 /* Only usable in contexts where the role can not change. */
 static bool dwc3_qcom_is_host(struct dwc3_qcom *qcom)
 {
-	struct dwc3 *dwc = platform_get_drvdata(qcom->dwc3);
+	struct dwc3 *dwc;
+
+	/*
+	 * FIXME: Fix this layering violation.
+	 */
+	dwc = platform_get_drvdata(qcom->dwc3);
+
+	/* Core driver may not have probed yet. */
+	if (!dwc)
+		return false;

 	return dwc->xhci;
 }
@@ -915,8 +915,11 @@ static void __gs_console_push(struct gs_console *cons)
 	}

 	req->length = size;
+
+	spin_unlock_irq(&cons->lock);
 	if (usb_ep_queue(ep, req, GFP_ATOMIC))
 		req->length = 0;
+	spin_lock_irq(&cons->lock);
 }

 static void gs_console_work(struct work_struct *work)
@@ -518,7 +518,9 @@ static int mmphw_probe(struct platform_device *pdev)
 		ret = -ENOENT;
 		goto failed;
 	}
-	clk_prepare_enable(ctrl->clk);
+	ret = clk_prepare_enable(ctrl->clk);
+	if (ret)
+		goto failed;

 	/* init global regs */
 	ctrl_set_default(ctrl);
@@ -571,11 +571,9 @@ static void virtio_mmio_release_dev(struct device *_d)
 {
 	struct virtio_device *vdev =
 			container_of(_d, struct virtio_device, dev);
-	struct virtio_mmio_device *vm_dev =
-			container_of(vdev, struct virtio_mmio_device, vdev);
-	struct platform_device *pdev = vm_dev->pdev;
+	struct virtio_mmio_device *vm_dev = to_virtio_mmio_device(vdev);

-	devm_kfree(&pdev->dev, vm_dev);
+	kfree(vm_dev);
 }

 /* Platform device */
@@ -586,7 +584,7 @@ static int virtio_mmio_probe(struct platform_device *pdev)
 	unsigned long magic;
 	int rc;

-	vm_dev = devm_kzalloc(&pdev->dev, sizeof(*vm_dev), GFP_KERNEL);
+	vm_dev = kzalloc(sizeof(*vm_dev), GFP_KERNEL);
 	if (!vm_dev)
 		return -ENOMEM;

@@ -4459,8 +4459,7 @@ int btrfs_cancel_balance(struct btrfs_fs_info *fs_info)
 		}
 	}

-	BUG_ON(fs_info->balance_ctl ||
-	       test_bit(BTRFS_FS_BALANCE_RUNNING, &fs_info->flags));
+	ASSERT(!test_bit(BTRFS_FS_BALANCE_RUNNING, &fs_info->flags));
 	atomic_dec(&fs_info->balance_cancel_req);
 	mutex_unlock(&fs_info->balance_mutex);
 	return 0;
@@ -4580,9 +4580,9 @@ static int cifs_readpage_worker(struct file *file, struct page *page,

 io_error:
 	kunmap(page);
-	unlock_page(page);

 read_complete:
+	unlock_page(page);
 	return rc;
 }

@@ -1017,7 +1017,14 @@ static int gfs2_show_options(struct seq_file *s, struct dentry *root)
 {
 	struct gfs2_sbd *sdp = root->d_sb->s_fs_info;
 	struct gfs2_args *args = &sdp->sd_args;
-	int val;
+	unsigned int logd_secs, statfs_slow, statfs_quantum, quota_quantum;
+
+	spin_lock(&sdp->sd_tune.gt_spin);
+	logd_secs = sdp->sd_tune.gt_logd_secs;
+	quota_quantum = sdp->sd_tune.gt_quota_quantum;
+	statfs_quantum = sdp->sd_tune.gt_statfs_quantum;
+	statfs_slow = sdp->sd_tune.gt_statfs_slow;
+	spin_unlock(&sdp->sd_tune.gt_spin);

 	if (is_ancestor(root, sdp->sd_master_dir))
 		seq_puts(s, ",meta");
@@ -1072,17 +1079,14 @@ static int gfs2_show_options(struct seq_file *s, struct dentry *root)
 	}
 	if (args->ar_discard)
 		seq_puts(s, ",discard");
-	val = sdp->sd_tune.gt_logd_secs;
-	if (val != 30)
-		seq_printf(s, ",commit=%d", val);
-	val = sdp->sd_tune.gt_statfs_quantum;
-	if (val != 30)
-		seq_printf(s, ",statfs_quantum=%d", val);
-	else if (sdp->sd_tune.gt_statfs_slow)
+	if (logd_secs != 30)
+		seq_printf(s, ",commit=%d", logd_secs);
+	if (statfs_quantum != 30)
+		seq_printf(s, ",statfs_quantum=%d", statfs_quantum);
+	else if (statfs_slow)
 		seq_puts(s, ",statfs_quantum=0");
-	val = sdp->sd_tune.gt_quota_quantum;
-	if (val != 60)
-		seq_printf(s, ",quota_quantum=%d", val);
+	if (quota_quantum != 60)
+		seq_printf(s, ",quota_quantum=%d", quota_quantum);
 	if (args->ar_statfs_percent)
 		seq_printf(s, ",statfs_percent=%d", args->ar_statfs_percent);
 	if (args->ar_errors != GFS2_ERRORS_DEFAULT) {
@@ -2027,6 +2027,9 @@ dbAllocDmapLev(struct bmap * bmp,
 	if (dbFindLeaf((dmtree_t *) & dp->tree, l2nb, &leafidx))
 		return -ENOSPC;

+	if (leafidx < 0)
+		return -EIO;
+
 	/* determine the block number within the file system corresponding
 	 * to the leaf at which free space was found.
 	 */
@@ -354,6 +354,11 @@ tid_t txBegin(struct super_block *sb, int flag)
 	jfs_info("txBegin: flag = 0x%x", flag);
 	log = JFS_SBI(sb)->log;

+	if (!log) {
+		jfs_error(sb, "read-only filesystem\n");
+		return 0;
+	}
+
 	TXN_LOCK();

 	INCREMENT(TxStat.txBegin);
@@ -798,6 +798,11 @@ static int jfs_link(struct dentry *old_dentry,
 	if (rc)
 		goto out;

+	if (isReadOnly(ip)) {
+		jfs_error(ip->i_sb, "read-only filesystem\n");
+		return -EROFS;
+	}
+
 	tid = txBegin(ip->i_sb, 0);

 	mutex_lock_nested(&JFS_IP(dir)->commit_mutex, COMMIT_MUTEX_PARENT);
@@ -31,6 +31,7 @@ struct ovl_sb {
 };

 struct ovl_layer {
+	/* ovl_free_fs() relies on @mnt being the first member! */
 	struct vfsmount *mnt;
 	/* Trap in ovl inode cache */
 	struct inode *trap;
@@ -41,6 +42,14 @@ struct ovl_layer {
 	int fsid;
 };

+/*
+ * ovl_free_fs() relies on @mnt being the first member when unmounting
+ * the private mounts created for each layer. Let's check both the
+ * offset and type.
+ */
+static_assert(offsetof(struct ovl_layer, mnt) == 0);
+static_assert(__same_type(typeof_member(struct ovl_layer, mnt), struct vfsmount *));
+
 struct ovl_path {
 	const struct ovl_layer *layer;
 	struct dentry *dentry;
@@ -557,7 +557,7 @@ restart:
 			continue;
 		/* Wait for dquot users */
 		if (atomic_read(&dquot->dq_count)) {
-			dqgrab(dquot);
+			atomic_inc(&dquot->dq_count);
 			spin_unlock(&dq_list_lock);
 			/*
 			 * Once dqput() wakes us up, we know it's time to free
@@ -2415,7 +2415,8 @@ int dquot_load_quota_sb(struct super_block *sb, int type, int format_id,

 	error = add_dquot_ref(sb, type);
 	if (error)
-		dquot_disable(sb, type, flags);
+		dquot_disable(sb, type,
+			      DQUOT_USAGE_ENABLED | DQUOT_LIMITS_ENABLED);

 	return error;
 out_fmt:
@@ -247,7 +247,7 @@ static int udf_name_from_CS0(struct super_block *sb,
 	}

 	if (translate) {
-		if (str_o_len <= 2 && str_o[0] == '.' &&
+		if (str_o_len > 0 && str_o_len <= 2 && str_o[0] == '.' &&
 		    (str_o_len == 1 || str_o[1] == '.'))
 			needsCRC = 1;
 		if (needsCRC) {
@@ -0,0 +1,21 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef _DT_BINDINGS_ADI_AD74413R_H
+#define _DT_BINDINGS_ADI_AD74413R_H
+
+#define CH_FUNC_HIGH_IMPEDANCE			0x0
+#define CH_FUNC_VOLTAGE_OUTPUT			0x1
+#define CH_FUNC_CURRENT_OUTPUT			0x2
+#define CH_FUNC_VOLTAGE_INPUT			0x3
+#define CH_FUNC_CURRENT_INPUT_EXT_POWER		0x4
+#define CH_FUNC_CURRENT_INPUT_LOOP_POWER	0x5
+#define CH_FUNC_RESISTANCE_INPUT		0x6
+#define CH_FUNC_DIGITAL_INPUT_LOGIC		0x7
+#define CH_FUNC_DIGITAL_INPUT_LOOP_POWER	0x8
+#define CH_FUNC_CURRENT_INPUT_EXT_POWER_HART	0x9
+#define CH_FUNC_CURRENT_INPUT_LOOP_POWER_HART	0xA
+
+#define CH_FUNC_MIN	CH_FUNC_HIGH_IMPEDANCE
+#define CH_FUNC_MAX	CH_FUNC_CURRENT_INPUT_LOOP_POWER_HART
+
+#endif /* _DT_BINDINGS_ADI_AD74413R_H */
@@ -53,6 +53,7 @@
 		} \
 		if (__sleep_us) \
 			usleep_range((__sleep_us >> 2) + 1, __sleep_us); \
+		cpu_relax(); \
 	} \
 	(cond) ? 0 : -ETIMEDOUT; \
 })
@@ -95,6 +96,7 @@
 		} \
 		if (__delay_us) \
 			udelay(__delay_us); \
+		cpu_relax(); \
 	} \
 	(cond) ? 0 : -ETIMEDOUT; \
 })
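The two iopoll hunks above insert `cpu_relax()` into the busy-wait loops. A user-space sketch of the loop shape those macros implement (iteration budget stands in for the kernel's ktime-based timeout, and the names here are illustrative):

```c
#include <assert.h>
#include <errno.h>

/*
 * Sketch of the read/poll/timeout loop: call op(addr) until cond(val)
 * holds or the budget is spent, with one final read after the budget
 * expires, mirroring the kernel macro's last check before -ETIMEDOUT.
 */
static int poll_until(int (*op)(void *), void *addr,
		      int (*cond)(int), unsigned int max_iters, int *val)
{
	unsigned int i;

	for (i = 0; i < max_iters; i++) {
		*val = op(addr);
		if (cond(*val))
			return 0;
		/* cpu_relax() would sit here in the kernel loop */
	}
	*val = op(addr);
	return cond(*val) ? 0 : -ETIMEDOUT;
}

/* A fake register that becomes "ready" after a few reads. */
static int fake_reg;
static int read_reg(void *addr) { return ++*(int *)addr; }
static int is_ready(int v) { return v >= 3; }
```

The added `cpu_relax()` does not change this control flow; it only hints the CPU (e.g. lowers pipeline priority or yields an SMT sibling) while the loop spins.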
@@ -303,6 +303,7 @@ struct mhi_controller_config {
  * @rddm_size: RAM dump size that host should allocate for debugging purpose
  * @sbl_size: SBL image size downloaded through BHIe (optional)
  * @seg_len: BHIe vector size (optional)
+ * @reg_len: Length of the MHI MMIO region (required)
  * @fbc_image: Points to firmware image buffer
  * @rddm_image: Points to RAM dump buffer
  * @mhi_chan: Points to the channel configuration table
@@ -383,6 +384,7 @@ struct mhi_controller {
 	size_t rddm_size;
 	size_t sbl_size;
 	size_t seg_len;
+	size_t reg_len;
 	struct image_info *fbc_image;
 	struct image_info *rddm_image;
 	struct mhi_chan *mhi_chan;
@@ -644,18 +644,22 @@ struct mlx5_pps {
 	u8                         enabled;
 };

-struct mlx5_clock {
-	struct mlx5_nb             pps_nb;
-	seqlock_t                  lock;
+struct mlx5_timer {
 	struct cyclecounter        cycles;
 	struct timecounter         tc;
-	struct hwtstamp_config     hwtstamp_config;
 	u32                        nominal_c_mult;
 	unsigned long              overflow_period;
 	struct delayed_work        overflow_work;
+};
+
+struct mlx5_clock {
+	struct mlx5_nb             pps_nb;
+	seqlock_t                  lock;
+	struct hwtstamp_config     hwtstamp_config;
 	struct ptp_clock          *ptp;
 	struct ptp_clock_info      ptp_info;
 	struct mlx5_pps            pps_info;
+	struct mlx5_timer          timer;
 };

 struct mlx5_dm;
@@ -503,6 +503,7 @@ struct mmc_host {
 struct device_node;

 struct mmc_host *mmc_alloc_host(int extra, struct device *);
+struct mmc_host *devm_mmc_alloc_host(struct device *dev, int extra);
 int mmc_add_host(struct mmc_host *);
 void mmc_remove_host(struct mmc_host *);
 void mmc_free_host(struct mmc_host *);
@@ -71,6 +71,23 @@ struct unwind_hint {
 	static void __used __section(".discard.func_stack_frame_non_standard") \
 		*__func_stack_frame_non_standard_##func = func

+/*
+ * STACK_FRAME_NON_STANDARD_FP() is a frame-pointer-specific function ignore
+ * for the case where a function is intentionally missing frame pointer setup,
+ * but otherwise needs objtool/ORC coverage when frame pointers are disabled.
+ */
+#ifdef CONFIG_FRAME_POINTER
+#define STACK_FRAME_NON_STANDARD_FP(func) STACK_FRAME_NON_STANDARD(func)
+#else
+#define STACK_FRAME_NON_STANDARD_FP(func)
+#endif
+
+#define ANNOTATE_NOENDBR \
+	"986: \n\t" \
+	".pushsection .discard.noendbr\n\t" \
+	_ASM_PTR " 986b\n\t" \
+	".popsection\n\t"
+
 #else /* __ASSEMBLY__ */

 /*
@@ -117,6 +134,13 @@ struct unwind_hint {
 	.popsection
 .endm

+.macro ANNOTATE_NOENDBR
+.Lhere_\@:
+	.pushsection .discard.noendbr
+	.quad .Lhere_\@
+	.popsection
+.endm
+
 #endif /* __ASSEMBLY__ */

 #else /* !CONFIG_STACK_VALIDATION */
@@ -126,10 +150,14 @@ struct unwind_hint {
 #define UNWIND_HINT(sp_reg, sp_offset, type, end) \
 	"\n\t"
 #define STACK_FRAME_NON_STANDARD(func)
+#define STACK_FRAME_NON_STANDARD_FP(func)
+#define ANNOTATE_NOENDBR
 #else
 #define ANNOTATE_INTRA_FUNCTION_CALL
 .macro UNWIND_HINT type:req sp_reg=0 sp_offset=0 end=0
 .endm
+.macro ANNOTATE_NOENDBR
+.endm
 #endif

 #endif /* CONFIG_STACK_VALIDATION */
@@ -148,6 +148,10 @@ retry:
 	if (gso_type & SKB_GSO_UDP)
 		nh_off -= thlen;

+	/* Kernel has a special handling for GSO_BY_FRAGS. */
+	if (gso_size == GSO_BY_FRAGS)
+		return -EINVAL;
+
 	/* Too small packets are not really GSO ones. */
 	if (skb->len - nh_off > gso_size) {
 		shinfo->gso_size = gso_size;
@@ -588,7 +588,14 @@ void v4l2_m2m_buf_queue(struct v4l2_m2m_ctx *m2m_ctx,
 static inline
 unsigned int v4l2_m2m_num_src_bufs_ready(struct v4l2_m2m_ctx *m2m_ctx)
 {
-	return m2m_ctx->out_q_ctx.num_rdy;
+	unsigned int num_buf_rdy;
+	unsigned long flags;
+
+	spin_lock_irqsave(&m2m_ctx->out_q_ctx.rdy_spinlock, flags);
+	num_buf_rdy = m2m_ctx->out_q_ctx.num_rdy;
+	spin_unlock_irqrestore(&m2m_ctx->out_q_ctx.rdy_spinlock, flags);
+
+	return num_buf_rdy;
 }

 /**
@@ -600,7 +607,14 @@ unsigned int v4l2_m2m_num_src_bufs_ready(struct v4l2_m2m_ctx *m2m_ctx)
 static inline
 unsigned int v4l2_m2m_num_dst_bufs_ready(struct v4l2_m2m_ctx *m2m_ctx)
 {
-	return m2m_ctx->cap_q_ctx.num_rdy;
+	unsigned int num_buf_rdy;
+	unsigned long flags;
+
+	spin_lock_irqsave(&m2m_ctx->cap_q_ctx.rdy_spinlock, flags);
+	num_buf_rdy = m2m_ctx->cap_q_ctx.num_rdy;
+	spin_unlock_irqrestore(&m2m_ctx->cap_q_ctx.rdy_spinlock, flags);
+
+	return num_buf_rdy;
 }

 /**
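The two v4l2-mem2mem hunks above wrap a counter read in the queue's ready-list lock. A user-space sketch of that snapshot-under-lock pattern, using a pthread mutex in place of the kernel spinlock (struct and function names here are illustrative, not the v4l2 types):

```c
#include <assert.h>
#include <pthread.h>

/* A queue whose num_rdy counter is updated by another context
 * only while holding the queue lock. */
struct queue_sketch {
	pthread_mutex_t lock;
	unsigned int num_rdy;
};

/* Take the lock for the read so the snapshot cannot tear against
 * a concurrent update of num_rdy. */
static unsigned int num_bufs_ready(struct queue_sketch *q)
{
	unsigned int n;

	pthread_mutex_lock(&q->lock);
	n = q->num_rdy;
	pthread_mutex_unlock(&q->lock);
	return n;
}
```

The value is of course only a snapshot; it may be stale by the time the caller acts on it, which is acceptable for the readiness checks these helpers serve.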
@@ -1361,6 +1361,12 @@ static inline bool sk_has_memory_pressure(const struct sock *sk)
 	return sk->sk_prot->memory_pressure != NULL;
 }

+static inline bool sk_under_global_memory_pressure(const struct sock *sk)
+{
+	return sk->sk_prot->memory_pressure &&
+		!!*sk->sk_prot->memory_pressure;
+}
+
 static inline bool sk_under_memory_pressure(const struct sock *sk)
 {
 	if (!sk->sk_prot->memory_pressure)
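The helper added above must guard the `memory_pressure` pointer before dereferencing it, since not every protocol publishes a pressure flag. A standalone sketch of that guard-then-deref shape (the struct here is illustrative, not the kernel's `struct proto`):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* A protocol may or may not expose a global pressure flag. */
struct proto_sketch {
	unsigned long *memory_pressure; /* NULL when the proto has none */
};

/* Short-circuit on the NULL pointer, then normalize the flag to bool. */
static bool under_global_pressure(const struct proto_sketch *prot)
{
	return prot->memory_pressure && !!*prot->memory_pressure;
}
```

The `&&` short-circuit is what makes the helper safe to call unconditionally, which is the point of introducing it.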
@@ -43,13 +43,13 @@ void *dma_common_contiguous_remap(struct page *page, size_t size,
 	void *vaddr;
 	int i;

-	pages = kmalloc_array(count, sizeof(struct page *), GFP_KERNEL);
+	pages = kvmalloc_array(count, sizeof(struct page *), GFP_KERNEL);
 	if (!pages)
 		return NULL;
 	for (i = 0; i < count; i++)
 		pages[i] = nth_page(page, i);
 	vaddr = vmap(pages, count, VM_DMA_COHERENT, prot);
-	kfree(pages);
+	kvfree(pages);

 	return vaddr;
 }
@@ -541,6 +541,7 @@ struct trace_buffer {
 	unsigned			flags;
 	int				cpus;
 	atomic_t			record_disabled;
+	atomic_t			resizing;
 	cpumask_var_t			cpumask;

 	struct lock_class_key		*reader_lock_key;
@@ -2041,7 +2042,7 @@ int ring_buffer_resize(struct trace_buffer *buffer, unsigned long size,

 	/* prevent another thread from changing buffer sizes */
 	mutex_lock(&buffer->mutex);
+	atomic_inc(&buffer->resizing);

 	if (cpu_id == RING_BUFFER_ALL_CPUS) {
 		/*
@@ -2184,6 +2185,7 @@ int ring_buffer_resize(struct trace_buffer *buffer, unsigned long size,
|
|||||||
atomic_dec(&buffer->record_disabled);
|
atomic_dec(&buffer->record_disabled);
|
||||||
}
|
}
|
||||||
|
|
||||||
|
atomic_dec(&buffer->resizing);
|
||||||
mutex_unlock(&buffer->mutex);
|
mutex_unlock(&buffer->mutex);
|
||||||
return 0;
|
return 0;
|
||||||
|
|
||||||
@@ -2204,6 +2206,7 @@ int ring_buffer_resize(struct trace_buffer *buffer, unsigned long size,
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
out_err_unlock:
|
out_err_unlock:
|
||||||
|
atomic_dec(&buffer->resizing);
|
||||||
mutex_unlock(&buffer->mutex);
|
mutex_unlock(&buffer->mutex);
|
||||||
return err;
|
return err;
|
||||||
}
|
}
|
||||||
@@ -5253,6 +5256,15 @@ int ring_buffer_swap_cpu(struct trace_buffer *buffer_a,
|
|||||||
if (local_read(&cpu_buffer_b->committing))
|
if (local_read(&cpu_buffer_b->committing))
|
||||||
goto out_dec;
|
goto out_dec;
|
||||||
|
|
||||||
|
/*
|
||||||
|
* When resize is in progress, we cannot swap it because
|
||||||
|
* it will mess the state of the cpu buffer.
|
||||||
|
*/
|
||||||
|
if (atomic_read(&buffer_a->resizing))
|
||||||
|
goto out_dec;
|
||||||
|
if (atomic_read(&buffer_b->resizing))
|
||||||
|
goto out_dec;
|
||||||
|
|
||||||
buffer_a->buffers[cpu] = cpu_buffer_b;
|
buffer_a->buffers[cpu] = cpu_buffer_b;
|
||||||
buffer_b->buffers[cpu] = cpu_buffer_a;
|
buffer_b->buffers[cpu] = cpu_buffer_a;
|
||||||
|
|
||||||
|
|||||||
@@ -1883,9 +1883,10 @@ update_max_tr_single(struct trace_array *tr, struct task_struct *tsk, int cpu)
 		 * place on this CPU. We fail to record, but we reset
 		 * the max trace buffer (no one writes directly to it)
 		 * and flag that it failed.
+		 * Another reason is resize is in progress.
 		 */
 		trace_array_printk_buf(tr->max_buffer.buffer, _THIS_IP_,
-			"Failed to swap buffers due to commit in progress\n");
+			"Failed to swap buffers due to commit or resize in progress\n");
 	}
 
 	WARN_ON_ONCE(ret && ret != -EAGAIN && ret != -EBUSY);
@@ -1332,9 +1332,10 @@ probe_mem_read(void *dest, void *src, size_t size)
 
 /* Note that we don't verify it, since the code does not come from user space */
 static int
-process_fetch_insn(struct fetch_insn *code, struct pt_regs *regs, void *dest,
+process_fetch_insn(struct fetch_insn *code, void *rec, void *dest,
 		   void *base)
 {
+	struct pt_regs *regs = rec;
 	unsigned long val;
 
 retry:
@@ -54,7 +54,7 @@ fetch_apply_bitfield(struct fetch_insn *code, void *buf)
 * If dest is NULL, don't store result and return required dynamic data size.
 */
static int
-process_fetch_insn(struct fetch_insn *code, struct pt_regs *regs,
+process_fetch_insn(struct fetch_insn *code, void *rec,
		   void *dest, void *base);
static nokprobe_inline int fetch_store_strlen(unsigned long addr);
static nokprobe_inline int
@@ -190,7 +190,7 @@ __get_data_size(struct trace_probe *tp, struct pt_regs *regs)
 
 /* Store the value of each argument */
 static nokprobe_inline void
-store_trace_args(void *data, struct trace_probe *tp, struct pt_regs *regs,
+store_trace_args(void *data, struct trace_probe *tp, void *rec,
 		 int header_size, int maxlen)
 {
 	struct probe_arg *arg;
@@ -205,12 +205,14 @@ store_trace_args(void *data, struct trace_probe *tp, struct pt_regs *regs,
 		/* Point the dynamic data area if needed */
 		if (unlikely(arg->dynamic))
 			*dl = make_data_loc(maxlen, dyndata - base);
-		ret = process_fetch_insn(arg->code, regs, dl, base);
-		if (unlikely(ret < 0 && arg->dynamic)) {
-			*dl = make_data_loc(0, dyndata - base);
-		} else {
-			dyndata += ret;
-			maxlen -= ret;
+		ret = process_fetch_insn(arg->code, rec, dl, base);
+		if (arg->dynamic) {
+			if (unlikely(ret < 0)) {
+				*dl = make_data_loc(0, dyndata - base);
+			} else {
+				dyndata += ret;
+				maxlen -= ret;
+			}
 		}
 	}
 }
@@ -217,9 +217,10 @@ static unsigned long translate_user_vaddr(unsigned long file_offset)
 
 /* Note that we don't verify it, since the code does not come from user space */
 static int
-process_fetch_insn(struct fetch_insn *code, struct pt_regs *regs, void *dest,
+process_fetch_insn(struct fetch_insn *code, void *rec, void *dest,
 		   void *base)
 {
+	struct pt_regs *regs = rec;
 	unsigned long val;
 
 	/* 1st stage: get value from context */