Why the Same Peptide Produces Different Binding Values Across Platforms

A researcher comparing published binding data for a peptide of interest will frequently encounter a disorienting range of dissociation constants. One study reports a Kd of 2 nM by surface plasmon resonance (SPR); another cites 45 nM by enzyme-linked immunosorbent assay (ELISA); a third measures 180 nM by fluorescence polarization (FP). The peptide is nominally identical in each case. The receptor is the same. Yet the numbers differ by nearly two orders of magnitude.

This is not an anomaly. It is the expected consequence of measuring a thermodynamic and kinetic phenomenon — molecular binding — through instruments that impose fundamentally different physical constraints on that phenomenon [1]. Interpreting binding data responsibly requires understanding what each platform actually measures, what artifacts it introduces, and what assumptions underlie the conversion of raw signal into a reported affinity value.

Surface Plasmon Resonance and Biolayer Interferometry: Kinetics at a Surface

SPR and its close relative biolayer interferometry (BLI) are optical techniques that detect mass accumulation at a sensor surface in real time. One binding partner — typically the receptor or target protein — is immobilized on that surface, and the analyte (often the peptide) flows across it in solution. The resulting sensorgram traces association and dissociation phases, from which the rate constants kon and koff are extracted; Kd is then calculated as their ratio, koff/kon.
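The arithmetic behind that conversion is simple; a minimal sketch with illustrative rate constants (not drawn from any particular study):

```python
# Deriving Kd from SPR rate constants. The values below are illustrative
# round numbers, not measurements from any real sensorgram.
k_on = 1.0e5    # association rate constant, M^-1 s^-1
k_off = 2.0e-4  # dissociation rate constant, s^-1

k_d = k_off / k_on   # dissociation constant, M
k_d_nM = k_d * 1e9   # convert to nM for reporting
print(f"Kd = {k_d_nM:.1f} nM")  # → Kd = 2.0 nM
```

Because Kd is a ratio, the same value can arise from fast-on/fast-off and slow-on/slow-off interactions — one reason the kinetic detail in a sensorgram carries information that an equilibrium Kd alone does not.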

The surface-immobilization step introduces several systematic distortions. If the receptor is coupled through a primary amine near its binding site, steric occlusion can reduce apparent affinity relative to a solution-phase measurement. Conversely, if the receptor is immobilized at high density, a small peptide analyte may rebind before fully dissociating — a phenomenon called mass transport limitation — causing the measured koff to appear slower than it truly is and artificially inflating apparent affinity [1]. Well-designed SPR experiments test for mass transport effects by varying analyte flow rate and receptor surface density, but not all published studies report these controls.

BLI shares the surface-immobilization requirement and is therefore susceptible to similar artifacts, though the fiber-optic tip format allows somewhat more flexible assay geometries. Both techniques generate genuine kinetic data — a meaningful advantage over equilibrium-only methods — but that advantage is only realized when the experimental design adequately addresses surface-related confounds.

Solution-Phase Methods: Equilibrium Without Surfaces

Fluorescence polarization and isothermal titration calorimetry (ITC) measure binding in homogeneous solution, eliminating surface immobilization as a variable. In FP, a fluorescently labelled peptide tumbles more slowly when bound to a larger receptor, producing a measurable change in polarization signal. ITC directly measures the heat released or absorbed during binding, yielding not only Kd but also enthalpy, entropy, and stoichiometry from a single experiment [2].

Solution-phase methods are generally considered closer to physiological conditions, but they carry their own limitations. FP requires fluorescent labelling of the peptide, and the label itself can perturb binding if positioned near the interaction interface. Non-specific interactions between the fluorophore and hydrophobic regions of the receptor can generate false-positive polarization signals, particularly at higher protein concentrations [3]. ITC demands relatively large quantities of both binding partners and is poorly suited to very low-affinity interactions (Kd above roughly 100 µM) or very tight interactions (Kd below approximately 1 nM), where the binding isotherm becomes too shallow or too steep, respectively, to fit reliably.

ELISA and Plate-Based Assays: Avidity and Apparent Affinity

ELISA formats introduce a distinct and frequently underappreciated source of systematic error: avidity. When a receptor is coated onto a microplate well and a multivalent detection antibody or a peptide with multiple binding epitopes is applied, the effective affinity measured reflects the sum of multiple simultaneous interactions rather than a single bimolecular event [4]. This avidity effect can make apparent affinities appear orders of magnitude tighter than the true monovalent Kd.

Even for monovalent peptides, surface coating of the receptor can alter its conformation, partially denature it, or block its binding site depending on the coating chemistry and buffer conditions. A 10-fold Kd difference between SPR and ELISA for the same peptide-receptor pair may therefore reflect the avidity architecture of the plate-based assay rather than any true difference in molecular affinity — a distinction with direct implications for how the data should be interpreted and weighted.


Evaluating Reproducibility: The Statistical Markers of Data Quality

Beyond platform-specific artifacts, the reliability of any single binding measurement depends on how consistently it can be reproduced. Two metrics are central to this evaluation: the coefficient of variation (CV%) and inter-assay reproducibility.

Coefficient of Variation and Intra-Assay Precision

The CV% — calculated as the standard deviation divided by the mean, expressed as a percentage — quantifies the spread of replicate measurements within a single experiment. For well-optimized biochemical binding assays, intra-assay CV% values below 10–15% are generally considered acceptable, with values below 5% indicating high precision [2]. CV% values above 20% within a single assay run should prompt investigation of pipetting consistency, signal stability, and reagent homogeneity before the data are interpreted biologically.
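The calculation itself is straightforward; a short sketch using hypothetical replicate values:

```python
import statistics

def cv_percent(replicates):
    """Coefficient of variation: sample standard deviation / mean, as a percentage."""
    mean = statistics.mean(replicates)
    sd = statistics.stdev(replicates)  # sample (n-1) standard deviation
    return 100.0 * sd / mean

# Hypothetical intra-assay replicate Kd measurements (nM) — illustrative only
replicates = [9.8, 10.4, 10.1, 9.6, 10.6]
print(f"CV = {cv_percent(replicates):.1f}%")  # → CV = 4.1%
```

A CV of ~4% on these hypothetical replicates would fall in the high-precision range described above; the same function applied across independent runs gives the inter-assay figure discussed in the next subsection.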

Low CV% alone does not, however, guarantee accuracy. A highly reproducible assay can consistently measure the wrong value if a systematic artifact — such as mass transport limitation or avidity — is present. Precision and accuracy are orthogonal properties, and both must be evaluated.

Inter-Assay and Inter-Laboratory Reproducibility

Inter-assay reproducibility — how consistently the same result is obtained across different experimental runs, operators, or laboratories — is a more stringent and more biologically meaningful standard. Acceptable inter-assay CV% thresholds are typically set somewhat higher than intra-assay thresholds, often in the 15–20% range, to accommodate legitimate sources of day-to-day variation such as reagent lot differences and instrument calibration drift [2].

Inter-laboratory reproducibility studies, in which the same peptide and receptor are sent to multiple independent groups using the same assay protocol, routinely reveal CV% values of 20–40% even under tightly controlled conditions. This baseline level of variability has direct implications for how confidently a single-laboratory Kd value should be generalized: a reported Kd of 10 nM from one laboratory may represent a true affinity anywhere from roughly 6 to 16 nM under ideal reproducibility conditions, and potentially a wider range if assay optimization was incomplete.
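The approximate 6–16 nM band quoted above can be rationalized with a simple error model. Assuming (as is common for affinity estimates, though not stated in the studies themselves) that Kd measurements scatter log-normally, a one-geometric-standard-deviation band at ~40% CV looks like this:

```python
import math

# Assumed log-normal error model for Kd estimates (a modeling assumption,
# not a claim about any specific dataset).
kd = 10.0  # reported Kd, nM
cv = 0.40  # inter-laboratory CV as a fraction

# For a log-normal distribution, sigma_ln = sqrt(ln(1 + CV^2))
sigma_ln = math.sqrt(math.log(1 + cv**2))
gsd = math.exp(sigma_ln)  # geometric standard deviation
low, high = kd / gsd, kd * gsd
print(f"~1-sigma range: {low:.1f} - {high:.1f} nM")
```

This yields roughly 7–15 nM, consistent with the range cited above; wider CVs or incomplete optimization broaden the band multiplicatively rather than additively.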


Recognizing Assay-Specific Artifacts

Mass Transport Limitations in SPR

As noted above, mass transport limitation in SPR causes the measured association rate to be limited not by the intrinsic kon of the interaction but by the rate at which analyte diffuses to the sensor surface. The practical consequence is that the calculated Kd appears tighter than it should be. Experiments that do not report flow rate optimization, surface density titration, or explicit mass transport modeling should be treated with caution when the reported Kd is used to draw conclusions about potency [1].

Avidity Effects in Sandwich Immunoassays

In any assay format where the target is captured on a surface and detected by a multivalent reagent — or where the peptide itself is presented in a multivalent context — the measured signal reflects an avidity-enhanced apparent affinity. Researchers comparing such data to solution-phase kinetic measurements should expect the plate-based value to be systematically tighter, sometimes by one to three orders of magnitude, without this representing a genuine difference in monovalent binding strength [4].

Non-Specific Binding in Fluorescence-Based Platforms

Fluorescence polarization and fluorescence resonance energy transfer (FRET) assays are sensitive to non-specific interactions between fluorescent probes and hydrophobic protein surfaces. These interactions generate background signal that can be mistaken for specific binding, particularly at high protein concentrations or in buffers with low ionic strength. Rigorous FP assays include competition controls with unlabelled peptide to confirm that the observed polarization change is displaceable and therefore specific [3].


Quality Control Metrics in Binding Assay Reports

A well-reported binding assay should contain sufficient information for an independent reader to assess data quality without repeating the experiment. Several specific elements are worth examining.

Controls, Standard Curves, and Curve-Fitting Statistics

Positive controls — known binders with established Kd values — and negative controls — scrambled peptides or buffer blanks — serve as internal benchmarks for assay performance on any given day. Their absence from a published report is a meaningful omission. Standard curves, where applicable, should span at least two orders of magnitude around the expected Kd and should be fit with reported R² values and residual plots. An R² above 0.99 with randomly distributed residuals indicates a well-behaved curve fit; systematic curvature in residuals suggests model misspecification [2].
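As a concrete illustration of what a well-behaved fit looks like, here is a minimal sketch that fits a 1:1 saturation binding model to synthetic (noise-free, purely illustrative) data spanning more than two logs around the Kd, and reports the R² of the fit. A crude grid search stands in for a proper nonlinear least-squares routine:

```python
# Fitting a 1:1 saturation binding curve and reporting R^2.
# Synthetic data with an assumed true Kd of 25 nM — illustrative only.

def fraction_bound(conc, kd):
    """Simple 1:1 binding isotherm: f = [L] / (Kd + [L])."""
    return conc / (kd + conc)

concs = [1, 3, 10, 30, 100, 300, 1000]             # nM; spans >2 logs around Kd
signal = [fraction_bound(c, 25.0) for c in concs]  # noise-free for clarity

def r_squared(obs, pred):
    mean = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean) ** 2 for o in obs)
    return 1 - ss_res / ss_tot

# One-parameter grid search over candidate Kd values, 1.0-99.9 nM in 0.1 steps
best_kd = min((k / 10 for k in range(10, 1000)),
              key=lambda kd: sum((s - fraction_bound(c, kd)) ** 2
                                 for c, s in zip(concs, signal)))
fit = [fraction_bound(c, best_kd) for c in concs]
print(f"fitted Kd = {best_kd:.1f} nM, R^2 = {r_squared(signal, fit):.3f}")
```

With real, noisy data the residuals (obs − pred at each concentration) should scatter randomly around zero; the systematic curvature mentioned above shows up as runs of same-signed residuals.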

Replicate Numbers and Error Representation

Binding affinity values reported without associated error bars, confidence intervals, or explicit replicate numbers cannot be evaluated for precision. A single-experiment Kd, however carefully measured, carries no statistical weight. The minimum acceptable standard for a publishable Kd determination is typically three independent replicates, with the mean and standard deviation or standard error reported alongside the curve-fit parameters.


Peptide Batch Variability and Its Impact on Binding Data

Binding assay variability does not originate solely from the measurement platform. The peptide itself is a source of heterogeneity that is frequently underappreciated in the interpretation of published data.

Purity, Aggregation, and Modification State

Synthetic peptides of nominally identical sequence can differ substantially in their functional binding properties depending on purity, aggregation state, and the presence of post-synthetic modifications such as oxidation of methionine or cysteine residues [5]. A peptide preparation with 85% purity by HPLC contains 15% impurities that may compete for binding, interfere with signal detection, or alter the aggregation behavior of the active fraction. Aggregated peptide species — which can form in concentrated stock solutions, particularly for hydrophobic sequences — may exhibit apparent binding to surfaces or proteins through non-specific mechanisms, inflating measured affinity.

Studies that report Kd values without specifying peptide purity, storage conditions, or aggregation state (assessed by dynamic light scattering or analytical ultracentrifugation) provide an incomplete basis for comparison with other datasets. Batch-to-batch variation in commercial peptide preparations has been documented to produce inter-assay CV% values of 15–30% for the same nominal compound [5], a contribution to variability that is entirely independent of the assay platform.


Single-Concentration Screening Versus Full Dose-Response Characterization

High-throughput binding screens frequently employ a single analyte concentration to rank-order compounds by percent binding or inhibition. This approach is efficient but generates data that cannot be converted to a reliable Kd without additional assumptions. A compound that shows 50% binding at 1 µM in a single-point screen may have a Kd anywhere from 100 nM to 10 µM depending on the shape of its dose-response curve and the Hill coefficient of the interaction.
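The Kd ambiguity can be made concrete with the Hill model, f = Lⁿ / (Kdⁿ + Lⁿ). Inverting it at a fixed concentration shows how a small signal error around 50% maps to a wide implied-Kd range when the dose-response curve is shallow. The numbers below are illustrative, not from any screen:

```python
# How the Kd implied by a single-point reading depends on the Hill coefficient.
# Under f = L^n / (Kd^n + L^n), inverting at L = 1 uM gives
# Kd = L * ((1 - f) / f)^(1/n). All values below are hypothetical.

def implied_kd(frac_bound, conc_uM=1.0, n=1.0):
    return conc_uM * ((1 - frac_bound) / frac_bound) ** (1 / n)

# A reading of "about 50%" with +/-10 percentage points of signal error:
for n in (0.5, 1.0, 2.0):
    lo = implied_kd(0.60, n=n)  # 60% bound -> tighter implied Kd
    hi = implied_kd(0.40, n=n)  # 40% bound -> weaker implied Kd
    print(f"Hill n = {n}: implied Kd range {lo:.2f} - {hi:.2f} uM")
```

For a shallow curve (n = 0.5) the same ±10% reading spans roughly a five-fold Kd range (~0.44–2.25 µM), while a steep curve (n = 2) constrains it to well under two-fold — which is why curve shape must be measured, not assumed, before single-point data are converted to affinities.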

Single-concentration data are appropriate for triage — identifying candidates that warrant further investigation — but should not be used to make quantitative affinity comparisons between compounds or to draw conclusions about selectivity. Full dose-response curves, fit with appropriate binding models and reported with confidence intervals on the Kd estimate, are the minimum standard for data that will inform downstream decisions about which peptide interactions merit rigorous kinetic characterization [1].


Red Flags in Binding Assay Reporting

Certain patterns in published binding data should prompt heightened scrutiny before the reported values are incorporated into a broader analysis.

Unexplained changes in assay platform between studies from the same group — for example, SPR in an early paper and ELISA in a follow-up — may reflect optimization of the assay to produce a desired result rather than a principled methodological choice. Missing error bars on Kd values, absence of replicate numbers, and failure to report curve-fit statistics are indicators of incomplete data presentation. Custom or in-house assays that have not been validated against an orthogonal method and have not been described in sufficient detail for independent replication should be treated as preliminary until corroborated.

The use of unvalidated antibodies as capture or detection reagents in plate-based assays is a particularly common source of unreliable binding data. Antibody cross-reactivity and lot-to-lot variability can introduce systematic errors that are invisible without appropriate controls.


Synthesizing Conflicting Binding Data Across Sources

When multiple published studies report different Kd values for the same peptide-receptor interaction, a structured approach to weighting those values is more informative than averaging them or defaulting to the most recent publication.

A practical framework assigns higher weight to studies that: employed solution-phase methods alongside surface-based methods and reported concordance; included full dose-response curves with replicate data and curve-fit statistics; characterized peptide purity and aggregation state; and used validated reagents with documented lot information. Studies that report only single-concentration data, lack controls, or employed a single unvalidated platform should be weighted accordingly — not discarded, but treated as hypothesis-generating rather than definitive [7].
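One way to operationalize such a framework is a simple quality checklist score per study. The criterion names and point values below are assumptions for illustration, not a validated instrument:

```python
# Illustrative study-quality checklist based on the weighting criteria above.
# Criterion names and point values are assumptions, not an established scheme.

CRITERIA = {
    "orthogonal_methods_concordant": 3,   # solution- and surface-phase, concordant
    "full_dose_response_with_stats": 2,   # replicates + curve-fit statistics
    "peptide_characterized": 2,           # purity and aggregation state reported
    "validated_reagents_with_lots": 1,    # documented lot information
}

def quality_score(study):
    """Sum the points for each checklist criterion the study satisfies."""
    return sum(pts for crit, pts in CRITERIA.items() if study.get(crit))

# A hypothetical study with full curves and peptide characterization only:
study_a = {"full_dose_response_with_stats": True, "peptide_characterized": True}
print(f"{quality_score(study_a)} of a possible {sum(CRITERIA.values())}")
```

A score like this supports transparent, reproducible weighting of conflicting Kd values — while leaving the values themselves unmerged, in keeping with the point below that heterogeneity should be characterized rather than averaged away.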

Meta-analytical approaches to binding data are still relatively uncommon in the peptide literature, but the principles of evidence weighting that have been developed for clinical data synthesis apply here: heterogeneity across studies is informative rather than merely inconvenient, and the sources of that heterogeneity — platform, peptide batch, assay conditions — should be characterized rather than averaged away.


Conclusion

Binding affinity is not a fixed property that an assay simply reads off a molecule. It is a parameter estimated through a measurement process that imposes its own constraints, introduces its own artifacts, and requires its own validation. A reported Kd value is best understood as the output of a specific experimental system rather than an intrinsic molecular constant — and the distance between those two things depends entirely on how carefully the assay was designed, executed, and reported.

Researchers who approach binding data with this perspective — examining platform choice, reproducibility metrics, peptide characterization, and reporting completeness before accepting a number — are better positioned to draw reliable conclusions from the preclinical literature and to identify where additional experimental investment is warranted.