Mercor

PK/PD Modeling / Pharmacometrics Lead

Amsterdam, North Holland, Netherlands

This role complements the client’s “Translational / Clinical Pharmacology Decision-Maker” team by grounding dose selection and exposure–response analysis in quantitative model structure and parameter plausibility.

Who We’re Looking For

  • Deep hands-on experience in PK, PD, exposure–response modeling, and ideally population PK or QSP.
  • Expert at model fitting, sensitivity analysis, and identifying non-plausible parameter spaces.
  • Can evaluate the validity of dose–exposure predictions and detect high-risk extrapolations.
  • Comfortable designing model evaluation rubrics that distinguish acceptable from non-credible outputs.
  • Able to articulate how quantitative checks should complement narrative decision logic.


Nice-to-have:

  • Experience supporting translational or clinical pharmacology leads in dose justification.
  • Familiarity with integrating nonclinical PK/PD data (2-species GLP → human FIH extrapolation).


Experience Level

  • 8–12 years of quantitative pharmacology experience in pharma, CROs, or modeling consultancies.
  • Strong portfolio in population PK/PD, exposure–response, and parameter estimation using NONMEM, Monolix, or equivalent tools.
  • Demonstrated ability to interpret model results for decision-making, not just fit data.
  • Can create fit-for-purpose models and critique model structures or assumptions under uncertainty.


Expectations

  • Design and refine micro-evaluations for PK/PD performance (curve fits, parameter checks, error taxonomies).
  • Encode quantitative sanity checks into model rubrics for automated evaluation (a minimal sketch follows this list).
  • Define failure conditions (e.g., unsafe extrapolation, poor coverage curves, invalid assumptions).
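
By way of illustration, here is a minimal Python sketch of how such sanity checks and failure conditions might be encoded for automated evaluation. The parameter names, bounds, and thresholds below are hypothetical placeholders, not validated acceptance criteria.

    from dataclasses import dataclass

    @dataclass
    class FitSummary:
        """Hypothetical summary of one automated PK model fit."""
        clearance_l_per_h: float        # estimated CL
        volume_l: float                 # estimated Vd
        rse_percent: float              # worst relative standard error across parameters
        max_pred_over_observed: float   # highest predicted exposure / highest observed exposure

    def evaluate_fit(fit: FitSummary) -> list[str]:
        """Return rubric failures; an empty list means the fit passes these checks.

        Bounds and thresholds are illustrative placeholders only.
        """
        failures = []
        # Parameter-plausibility checks: flag estimates outside a credible physiological range.
        if not (0.1 <= fit.clearance_l_per_h <= 100.0):
            failures.append("implausible clearance estimate")
        if not (1.0 <= fit.volume_l <= 500.0):
            failures.append("implausible volume of distribution")
        # Precision check: poorly identified parameters make dose predictions unreliable.
        if fit.rse_percent > 50.0:
            failures.append("parameter imprecision above threshold (RSE > 50%)")
        # Extrapolation check: predictions far beyond the observed exposure range are high-risk.
        if fit.max_pred_over_observed > 2.0:
            failures.append("unsafe extrapolation beyond observed exposure range")
        return failures

    if __name__ == "__main__":
        example = FitSummary(clearance_l_per_h=12.0, volume_l=45.0,
                             rse_percent=22.0, max_pred_over_observed=3.1)
        print(evaluate_fit(example))   # ['unsafe extrapolation beyond observed exposure range']

In practice, checks of this kind would sit alongside narrative expert review rather than replace it, consistent with the point above on complementing decision logic.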


Inputs we give:

  • PK/PD datasets, tox summaries, and performance prompts (e.g., “fit exposure–response curves, interpret safety margins”).
  • Example model outputs from automated systems.


Expected outputs:

  • Quantitative Rubrics: clear thresholds for acceptable parameter fits, coverage curve quality, and model integrity checks.
  • Golden Fit Examples: representative “ideal” PK/PD model outputs and visualizations for calibration.
  • Error Taxonomy: structured list of typical modeling or fitting errors, with root-cause annotations (an example structure follows this list).
  • Meta-Layer Commentary: short note per rubric capturing how expert modelers recognize implausible or unsafe fits beyond numeric error values.
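
For concreteness, one possible shape for the error-taxonomy deliverable, expressed as structured data in Python. The entries, severity labels, and root-cause annotations are invented examples rather than an agreed schema.

    # Hypothetical structure for an error-taxonomy deliverable; entries and
    # root-cause annotations are illustrative examples only.
    ERROR_TAXONOMY = [
        {
            "error": "biased terminal-phase fit",
            "symptom": "systematic under-prediction of late concentration samples",
            "root_cause": "one-compartment structure applied to multi-phasic decline",
            "severity": "major",
        },
        {
            "error": "over-confident exposure-response slope",
            "symptom": "narrow confidence interval despite sparse high-dose data",
            "root_cause": "extrapolation beyond the studied dose range",
            "severity": "critical",
        },
        {
            "error": "ill-conditioned covariance step",
            "symptom": "large or missing standard errors on key parameters",
            "root_cause": "over-parameterised model relative to the data",
            "severity": "major",
        },
    ]

    # A rubric could then reference taxonomy entries by their "error" key when
    # annotating automated model outputs.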


Engagement Model & Compensation

  • Contract / part-time, remote, outcome-based deliverables.