The OpenADMET - ExpansionRx Blind Challenge Has Come to an End

Maria Castellanos, Hugo MacDermott-Opeskin

Absorption, Distribution, Metabolism, Excretion, and Toxicity (ADMET) properties can make or break the preclinical and clinical development of small molecules. At OpenADMET we aim to address the unpredictable nature of these properties through open science, generating high-quality experimental data and building robust predictive models of ADMET properties.

A key component of these efforts is running community blind challenges, designed to benchmark the state-of-the-art in ADMET models from a diverse array of participants on novel datasets. With these goals in mind, on October 27, 2025 we launched the ExpansionRx-OpenADMET blind challenge, in partnership with Expansion Therapeutics and Hugging Face.

The challenge reflected real-world data complexity

For this challenge, participants were tasked with predicting nine crucial endpoints from a real-world drug discovery campaign prosecuted by ExpansionRx targeting RNA-mediated diseases, including myotonic dystrophy, amyotrophic lateral sclerosis (ALS), and dementia. Comprising over 7,000 molecules, the Expansion dataset is one of the largest lead-optimization-like datasets available in the public domain (more on this later) and is a highly realistic benchmark for use both in this challenge and in the future.

When designing a challenge to test the utility of machine learning models in drug discovery, it is crucial to frame it in a way that mirrors the real challenges faced by medicinal chemistry and data science teams. To achieve this, we used a time-split strategy: participants were tasked with predicting ADMET properties of late-stage molecules using earlier-stage data from the same campaign.
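For readers who want to build a similar split for their own data, the sketch below shows one way to do a time split with pandas. It is a minimal illustration using a hypothetical `registration_date` column and cutoff, not the exact procedure used to construct the challenge sets.

```python
import pandas as pd

# Minimal time-split sketch (hypothetical file and column names; the actual
# challenge split was prepared by the organizers).
df = pd.read_csv("expansion_admet.csv", parse_dates=["registration_date"])

cutoff = df["registration_date"].quantile(0.8)   # e.g. first ~80% of the campaign
train = df[df["registration_date"] <= cutoff]    # earlier-stage molecules: training data
test = df[df["registration_date"] > cutoff]      # later-stage molecules: endpoints to predict
```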

During the challenge, hosted in a Hugging Face Space, participants could access a fully disclosed training set as well as the molecules in the test set (with endpoint data blinded). After training, predictions on the full test set were uploaded to the Hugging Face platform and evaluated against one half of the test set (the validation set), while the other half remained hidden. Submissions were ranked on the live leaderboard by the macro-averaged relative absolute error (MA-RAE) on this validation subset, providing a reflection of performance on the blinded test set while discouraging participants from overfitting to the live leaderboard. Participants were evaluated on the full test set twice: once at the intermediate leaderboard, and again at the close of the competition (this blog post!). We believe this strikes a balance between community engagement through the live leaderboard and minimizing overfitting to its evaluation metric.

Providing a new high-quality dataset for the community: Full data release

Today, we aren't just revealing the winners of the challenge: We are officially releasing the full dataset provided by ExpansionRx 🎉. You can find it on Hugging Face. We hope that this dataset will help the drug discovery community develop new predictive models and improve existing ADMET-prediction methods. In the words of Jon Ainsley, from the ExpansionRx team:

"When we launched this challenge, we asked the scientific community to put our data to work - and honestly, they delivered beyond anything I imagined. Over 370 participants brought creativity, rigor, and genuine collaboration to a problem that matters deeply, not only to Expansion, but the wider drug discovery community. This is what's possible when real project datasets meet open science. It's been remarkable to see the community's ingenuity on full display, approaches I hadn't considered, new methods shared openly, and the state of the art brought into the open where everyone can learn from it.

Now, with the full dataset released, we pass the baton to the broader community. Build on it, benchmark against it, prove us wrong about what's predictable. Every improvement gets us a step closer to a future where ADME becomes straightforward.

To others with data to share: consider publishing what you can and give the community better problems to solve. The more real-world data we collectively put out there, the faster we make drug discovery simpler, and the sooner patients benefit."

Extraordinary community engagement

The response to the challenge surpassed our wildest expectations. By the closing date of January 19, 2026, the challenge had seen:

  • 370+ participants across industry and academia, with different backgrounds and levels of experience.
  • More than 4,000 total submissions.
  • Lively collaboration in our Discord server, which became a hub for troubleshooting, method sharing, and open-source contributions from participants.
  • Training data downloaded over 1,000 times.

This exercise was a true reflection of what the drug discovery community can achieve by sharing its methods through open science! We want to thank every participant for their efforts throughout the challenge. It has been inspiring to see the range of modeling approaches and the community spirit the challenge fostered.

Over the course of two and a half months, challenge engagement increased steadily. The Empirical Cumulative Distribution Function (ECDF) plot below shows the cumulative number of new participants or teams (identified by their unique Hugging Face username) joining each day.
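As a rough illustration of how such a curve can be produced, the sketch below finds the date on which each unique username first appears in a hypothetical submission log and accumulates the counts over time; the actual figure was generated from the challenge's own submission records.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical submission log with one row per submission ("username", "timestamp").
subs = pd.read_csv("submissions.csv", parse_dates=["timestamp"])

# Date on which each unique username first submitted, then a running total per day.
first_seen = subs.sort_values("timestamp").groupby("username")["timestamp"].min()
daily_new = first_seen.dt.floor("D").value_counts().sort_index()
daily_new.cumsum().plot(drawstyle="steps-post")
plt.ylabel("Cumulative participants")
plt.show()
```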

Additionally, our Discord server proved to be a great platform for participants to engage in fruitful conversations and collaborate to build better models. A total of 446 participants joined the server, starting from the day the challenge was announced.

Our evaluation strategy aimed to capture the error across all endpoints

Participants were evaluated and ranked according to the MA-RAE, which normalizes the mean absolute error (MAE) by the dynamic range of the test data for each endpoint and gives equal weight to each endpoint's error. This is especially important when there is a data imbalance between the different endpoints.
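As a concrete reference, here is a minimal sketch of the metric as described above; the organizers' exact implementation (for example, how the dynamic range or missing values are handled) may differ.

```python
import numpy as np

def ma_rae(y_true_by_endpoint, y_pred_by_endpoint):
    """Macro-averaged relative absolute error (sketch).

    Per-endpoint MAE normalized by that endpoint's dynamic range in the test
    data, then averaged with equal weight per endpoint.
    """
    raes = []
    for endpoint, y_true in y_true_by_endpoint.items():
        y_true = np.asarray(y_true, dtype=float)
        y_pred = np.asarray(y_pred_by_endpoint[endpoint], dtype=float)
        mae = np.mean(np.abs(y_true - y_pred))          # mean absolute error for this endpoint
        dynamic_range = y_true.max() - y_true.min()     # spread of the test values
        raes.append(mae / dynamic_range)                # relative absolute error
    return float(np.mean(raes))                         # equal weight per endpoint
```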

An example of the MAE and RAE distributions across the nine endpoints is shown below for the top four participants on the final leaderboard. Compared with the minimum distribution for each endpoint, averaging the RAE as the leaderboard evaluation strategy captured the best performers across all endpoints rather than biasing toward the endpoints with more molecules, such as LogD and KSOL.

Next steps

We hope you enjoyed participating in this challenge as much as we enjoyed hosting it! Here’s what to look out for:

  • A second blog post will be published next week, diving deeper into the trends we observed in the models and techniques used by the top participants. Please stay tuned!

  • We will host a series of webinars, during which the top participants may present their work and results if they wish. These webinars will be recorded and made available asynchronously on YouTube.

  • We plan to release a summary preprint detailing the challenge results, and all challenge participants on the final leaderboard are invited to be co-authors.

OpenADMET is committed to advancing ADMET predictive modeling and will continue to host blind challenges quarterly. To that end, we have prepared this survey: please fill it out with your contact information and any feedback on this past challenge. We will use this information to send invitations to the upcoming webinar series, and we will be announcing our next blind challenge very shortly!

Please also indicate whether you want to be included as a co-author, and provide your name and affiliation as you want them to appear in the paper. Importantly, even if you already provided your contact information with your submission, please enter it again in the survey so that we have a centralized record.

The Final Leaderboard

Now, the wait is over! The final leaderboard, showing model performance against the entire blinded test set, is presented below. Note that the live leaderboard in our Hugging Face Space is evaluated on only half of the full test set (the validation set), so the performance shown here differs from the live leaderboard. For easier comparison, we have included both the final rank (evaluated on the full test set) and the initial rank (validation set). The final rank is also color-coded for clarity: green indicates the rank was maintained or improved compared to the initial standings, while red indicates it dropped. Note that some participants moved up in rank because others were removed from the competition for not fulfilling the eligibility criteria (e.g., missing model reports); we nonetheless felt it important to link to participants' original positions on the live leaderboard.

To assess whether the differences between submissions are statistically significant, we have included a Compact Letter Display (CLD) in the table below. The CLD summarizes the results of the Tukey HSD (Honestly Significant Difference) test, which compares every possible pair of group means computed from bootstrapped samples.

The Tukey HSD test is used to identify precisely which groups differ from one another after an ANOVA finds a general difference. The CLD translates these complex pairwise findings into a simple code:

  • Same letter: groups that share a letter (e.g., "a" and "ab") are not statistically different according to the Tukey test.
  • Different letters: groups that share no common letters (e.g., "a" vs. "b") are significantly different.

Note that the bootstrapping process used to generate the performance distributions here likely underestimates the model variance that would be obtained via a more rigorous procedure such as cross-validation; however, such techniques were considered beyond the scope of this challenge for technical reasons.
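For illustration, the sketch below runs this kind of analysis on synthetic data: the test set is resampled with replacement, a relative absolute error is computed per model for each replicate, and the bootstrapped scores are compared with statsmodels' Tukey HSD. Models that are not significantly different would then share a letter in the CLD. This is a hedged sketch, not the organizers' analysis code.

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Synthetic stand-ins for the blinded test values (one endpoint, for brevity)
# and for three models' predictions; the real analysis covered all nine endpoints.
rng = np.random.default_rng(0)
n, n_boot = 500, 200
y_true = rng.normal(size=n)
predictions = {m: y_true + rng.normal(scale=s, size=n)
               for m, s in [("model_a", 0.30), ("model_b", 0.35), ("model_c", 0.60)]}

scores, labels = [], []
for _ in range(n_boot):
    idx = rng.integers(0, n, n)                          # resample molecules with replacement
    for model, y_pred in predictions.items():
        mae = np.mean(np.abs(y_true[idx] - y_pred[idx]))
        rae = mae / (y_true[idx].max() - y_true[idx].min())
        scores.append(rae)                               # one score per model per replicate
        labels.append(model)

# Tukey HSD over the bootstrapped score distributions; non-significant pairs
# would share a letter when the results are condensed into a CLD.
print(pairwise_tukeyhsd(np.asarray(scores), np.asarray(labels), alpha=0.05))
```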

Important Notes

  • Invalid Hugging Face usernames:
    Submissions made under invalid Hugging Face usernames have been removed; providing a valid username was a requirement stated from the start and reiterated in the “Submit” tab.

  • Required model reports:
    All submissions to the final leaderboard required a valid link to a written report or GitHub repository giving a general description of the model used. Because this requirement was announced in advance and participants were reminded multiple times, submissions without a report are excluded from this final leaderboard. This transparency is an essential component of the challenge: it helps us, the organizers, and the community understand which models and strategies lead to better predictive performance, and it will help advance ADMET models in the future.

| Rank | User | CLD | MA-RAE | R² | Spearman R | Kendall's Tau | Initial rank | Model details |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | pebble | a | 0.5113 ± 0.0070 | 0.6372 ± 0.0102 | 0.7932 ± 0.0058 | 0.6207 ± 0.0056 | 1 | link |
| 2 | overfit | b | 0.5284 ± 0.0070 | 0.6414 ± 0.0100 | 0.7881 ± 0.0060 | 0.6082 ± 0.0058 | 3 | link |
| 3 | moka | c | 0.5321 ± 0.0070 | 0.6253 ± 0.0103 | 0.7912 ± 0.0062 | 0.6169 ± 0.0058 | 5 | link |
| 4 | campfire-capillary | d | 0.5517 ± 0.0069 | 0.5962 ± 0.0103 | 0.7555 ± 0.0064 | 0.5818 ± 0.0059 | 2 | link |
| 5 | shin-chan | e | 0.5536 ± 0.0073 | 0.5902 ± 0.0111 | 0.7686 ± 0.0063 | 0.5941 ± 0.0058 | 4 | link |
| 6 | HybridADMET | f | 0.5582 ± 0.0072 | 0.5778 ± 0.0118 | 0.7590 ± 0.0067 | 0.5848 ± 0.0062 | 11 | link |
| 7 | rced_nvx | g | 0.5663 ± 0.0073 | 0.5786 ± 0.0108 | 0.7465 ± 0.0069 | 0.5725 ± 0.0062 | 7 | link |
| 8 | tibo | g | 0.5672 ± 0.0073 | 0.5787 ± 0.0107 | 0.7491 ± 0.0067 | 0.5742 ± 0.0061 | 8 | link |
| 9 | beetroot | h | 0.5726 ± 0.0075 | 0.5721 ± 0.0116 | 0.7441 ± 0.0071 | 0.5673 ± 0.0063 | 6 | link |
| 10 | temal | i | 0.5809 ± 0.0074 | 0.5514 ± 0.0108 | 0.7649 ± 0.0061 | 0.5881 ± 0.0058 | 25 | link |
| 11 | yanyn | i | 0.5818 ± 0.0071 | 0.5628 ± 0.0105 | 0.7522 ± 0.0065 | 0.5782 ± 0.0060 | 12 | link |
| 12 | crh201 | i | 0.5824 ± 0.0079 | 0.5510 ± 0.0129 | 0.7366 ± 0.0071 | 0.5610 ± 0.0063 | 9 | link |
| 13 | arabica | j | 0.5880 ± 0.0075 | 0.5352 ± 0.0115 | 0.7514 ± 0.0067 | 0.5765 ± 0.0062 | 58 | link |
| 14 | Gashaw | k | 0.5915 ± 0.0072 | 0.5526 ± 0.0111 | 0.7364 ± 0.0070 | 0.5622 ± 0.0063 | 15 | link |
| 15 | c-test | l | 0.5940 ± 0.0074 | 0.5463 ± 0.0109 | 0.7509 ± 0.0066 | 0.5768 ± 0.0061 | 19 | link |
| 16 | Universal15 | l,m | 0.5955 ± 0.0076 | 0.5438 ± 0.0114 | 0.7520 ± 0.0069 | 0.5751 ± 0.0062 | 14 | link |
| 17 | echo | m | 0.5968 ± 0.0076 | 0.5416 ± 0.0112 | 0.7548 ± 0.0066 | 0.5806 ± 0.0061 | 27 | link |
| 18 | Artichoke | m,n | 0.5974 ± 0.0074 | 0.5173 ± 0.0115 | 0.7209 ± 0.0071 | 0.5543 ± 0.0063 | 10 | link |
| 19 | UncertainTea | n,o | 0.5989 ± 0.0073 | 0.5508 ± 0.0107 | 0.7428 ± 0.0068 | 0.5666 ± 0.0062 | 16 | link |
| 20 | liberica | o,p | 0.5999 ± 0.0075 | 0.5224 ± 0.0117 | 0.7491 ± 0.0070 | 0.5734 ± 0.0063 | 70 | link |
| 21 | beardy-polonium | p,q | 0.6017 ± 0.0075 | 0.5345 ± 0.0116 | 0.7406 ± 0.0069 | 0.5672 ± 0.0062 | 21 | link |
| 22 | Ensem | q | 0.6029 ± 0.0077 | 0.5340 ± 0.0112 | 0.7520 ± 0.0067 | 0.5770 ± 0.0062 | 40 | link |
| 23 | JacksonBurns | r | 0.6061 ± 0.0075 | 0.5266 ± 0.0114 | 0.7367 ± 0.0067 | 0.5577 ± 0.0062 | 30 | link |
| 24 | Proper | s | 0.6082 ± 0.0073 | 0.5225 ± 0.0116 | 0.7339 ± 0.0071 | 0.5596 ± 0.0063 | 32 | link |
| 25 | aglisman | s | 0.6090 ± 0.0079 | 0.5091 ± 0.0125 | 0.7386 ± 0.0069 | 0.5639 ± 0.0062 | 18 | link |
| 26 | vaishnavi53 | s | 0.6091 ± 0.0075 | 0.5240 ± 0.0110 | 0.7194 ± 0.0072 | 0.5495 ± 0.0065 | 29 | link |
| 27 | Na123 | s | 0.6092 ± 0.0079 | 0.5255 ± 0.0114 | 0.7503 ± 0.0068 | 0.5751 ± 0.0062 | 45 | link |
| 28 | nrosa1 | t | 0.6123 ± 0.0079 | 0.5040 ± 0.0126 | 0.7375 ± 0.0071 | 0.5659 ± 0.0064 | 28 | link |
| 29 | chundu05 | t,u | 0.6130 ± 0.0077 | 0.5211 ± 0.0127 | 0.7173 ± 0.0074 | 0.5418 ± 0.0066 | 24 | link |
| 30 | robusta | t,u | 0.6133 ± 0.0079 | 0.4747 ± 0.0141 | 0.7376 ± 0.0073 | 0.5644 ± 0.0065 | 23 | link |
| 31 | Hydra | u | 0.6142 ± 0.0081 | 0.4497 ± 0.0151 | 0.7082 ± 0.0076 | 0.5456 ± 0.0065 | 20 | link |
| 32 | Shorku | v | 0.6162 ± 0.0080 | 0.5244 ± 0.0116 | 0.7327 ± 0.0072 | 0.5581 ± 0.0064 | 44 | link |
| 33 | chiliflake | v | 0.6179 ± 0.0077 | 0.5106 ± 0.0113 | 0.7304 ± 0.0064 | 0.5533 ± 0.0059 | 36 | link |
| 34 | interstellar-explorer | w | 0.6214 ± 0.0082 | 0.4633 ± 0.0146 | 0.7169 ± 0.0075 | 0.5494 ± 0.0065 | 38 | link |
| 35 | molairity | w,x | 0.6228 ± 0.0078 | 0.5129 ± 0.0112 | 0.7221 ± 0.0071 | 0.5464 ± 0.0064 | 39 | link |
| 36 | tmp1234 | w,x | 0.6233 ± 0.0082 | 0.4968 ± 0.0132 | 0.7201 ± 0.0075 | 0.5488 ± 0.0065 | 52 | link |
| 37 | okidoki | x | 0.6239 ± 0.0082 | 0.5056 ± 0.0119 | 0.7333 ± 0.0066 | 0.5552 ± 0.0061 | 26 | link |
| 38 | combo | y | 0.6261 ± 0.0076 | 0.5078 ± 0.0148 | 0.7154 ± 0.0073 | 0.5379 ± 0.0065 | 42 | link |
| 39 | Mlzzzzz | y | 0.6272 ± 0.0072 | 0.4934 ± 0.0111 | 0.7055 ± 0.0073 | 0.5311 ± 0.0064 | 54 | link |
| 40 | median3 | z | 0.6330 ± 0.0074 | 0.4978 ± 0.0107 | 0.7096 ± 0.0070 | 0.5340 ± 0.0063 | 65 | link |
| 41 | wikke69 | z | 0.6338 ± 0.0078 | 0.4975 ± 0.0123 | 0.7138 ± 0.0074 | 0.5405 ± 0.0066 | 47 | link |
| 42 | martin | A | 0.6387 ± 0.0089 | 0.4613 ± 0.0149 | 0.7039 ± 0.0078 | 0.5315 ± 0.0066 | 22 | link |
| 43 | mechaman | A | 0.6390 ± 0.0081 | 0.4465 ± 0.0134 | 0.6989 ± 0.0077 | 0.5259 ± 0.0066 | 59 | link |
| 44 | madie31 | B | 0.6419 ± 0.0078 | 0.4817 ± 0.0114 | 0.6901 ± 0.0070 | 0.5164 ± 0.0063 | 63 | link |
| 45 | pequqa | B | 0.6426 ± 0.0071 | 0.4933 ± 0.0112 | 0.6955 ± 0.0074 | 0.5190 ± 0.0065 | 68 | link |
| 46 | ZaZdrowie | C | 0.6454 ± 0.0072 | 0.4951 ± 0.0104 | 0.7114 ± 0.0075 | 0.5316 ± 0.0065 | 73 | link |
| 47 | rappleton | C | 0.6455 ± 0.0077 | 0.4848 ± 0.0112 | 0.7234 ± 0.0072 | 0.5491 ± 0.0064 | 79 | link |
| 48 | kermt-t | D | 0.6486 ± 0.0079 | 0.4557 ± 0.0117 | 0.7146 ± 0.0069 | 0.5427 ± 0.0062 | 88 | link |
| 49 | briford | E | 0.6532 ± 0.0084 | 0.4559 ± 0.0123 | 0.7028 ± 0.0079 | 0.5285 ± 0.0068 | 71 | link |
| 50 | WakuwakuADMET | F | 0.6558 ± 0.0084 | 0.4547 ± 0.0134 | 0.6909 ± 0.0075 | 0.5178 ± 0.0064 | 86 | link |
| 51 | Q_Accel | G | 0.6684 ± 0.0085 | 0.4117 ± 0.0147 | 0.6801 ± 0.0077 | 0.5120 ± 0.0066 | 116 | link |
| 52 | DMakarov | G | 0.6689 ± 0.0083 | 0.4258 ± 0.0127 | 0.6698 ± 0.0078 | 0.4970 ± 0.0067 | 90 | link |
| 53 | lifaen | H | 0.6755 ± 0.0082 | 0.4048 ± 0.0137 | 0.6687 ± 0.0077 | 0.5013 ± 0.0065 | 114 | link |
| 54 | jeremy | I | 0.6813 ± 0.0084 | 0.4360 ± 0.0135 | 0.6828 ± 0.0079 | 0.5118 ± 0.0068 | 120 | link |
| 55 | theZone | I | 0.6814 ± 0.0094 | 0.4328 ± 0.0145 | 0.7123 ± 0.0077 | 0.5380 ± 0.0065 | 110 | link |
| 56 | KagakuData | I | 0.6815 ± 0.0080 | 0.4370 ± 0.0121 | 0.6799 ± 0.0080 | 0.5052 ± 0.0071 | 111 | link |
| 57 | diliadis | J | 0.6835 ± 0.0080 | 0.4369 ± 0.0117 | 0.6792 ± 0.0075 | 0.5057 ± 0.0065 | 101 | link |
| 58 | vibdsobeyens | J | 0.6844 ± 0.0089 | 0.3894 ± 0.0153 | 0.6562 ± 0.0085 | 0.4872 ± 0.0071 | 118 | link |
| 59 | leeherman | K | 0.6882 ± 0.0090 | 0.4120 ± 0.0142 | 0.6695 ± 0.0081 | 0.4969 ± 0.0071 | 109 | link |
| 60 | rez3vil | L | 0.6963 ± 0.0085 | 0.4110 ± 0.0129 | 0.6536 ± 0.0084 | 0.4838 ± 0.0073 | 138 | link |
| 61 | agitter | M | 0.7015 ± 0.0085 | 0.4067 ± 0.0130 | 0.6677 ± 0.0082 | 0.4949 ± 0.0071 | 143 | link |
| 62 | riemann | N | 0.7083 ± 0.0082 | 0.3710 ± 0.0133 | 0.6500 ± 0.0077 | 0.4813 ± 0.0065 | 165 | link |
| 63 | itetko | O | 0.7115 ± 0.0079 | 0.3908 ± 0.0117 | 0.6868 ± 0.0076 | 0.5124 ± 0.0066 | 169 | link |
| 64 | davidgiganti | O | 0.7129 ± 0.0075 | 0.3506 ± 0.0130 | 0.6486 ± 0.0089 | 0.4820 ± 0.0073 | 170 | link |
| 65 | KNIMEST | P | 0.7184 ± 0.0102 | 0.3507 ± 0.0180 | 0.6707 ± 0.0081 | 0.4969 ± 0.0068 | 150 | link |
| 66 | iCsmiles | P | 0.7184 ± 0.0079 | 0.3490 ± 0.0128 | 0.6424 ± 0.0079 | 0.4719 ± 0.0068 | 167 | link |
| 67 | EGabrielle | P | 0.7185 ± 0.0082 | 0.4004 ± 0.0113 | 0.6581 ± 0.0082 | 0.4846 ± 0.0070 | 156 | link |
| 68 | AzizA-A | Q | 0.7238 ± 0.0084 | 0.3827 ± 0.0125 | 0.6548 ± 0.0081 | 0.4837 ± 0.0070 | 148 | link |
| 69 | tespharma | R | 0.7258 ± 0.0094 | 0.3508 ± 0.0148 | 0.6706 ± 0.0083 | 0.4961 ± 0.0069 | 160 | link |
| 70 | Kalen_UCL | R | 0.7260 ± 0.0087 | 0.3918 ± 0.0119 | 0.6586 ± 0.0078 | 0.4839 ± 0.0067 | 139 | link |
| 71 | eq_az | R,S | 0.7272 ± 0.0075 | 0.3850 ± 0.0113 | 0.6507 ± 0.0080 | 0.4806 ± 0.0069 | 178 | link |
| 72 | ellieberry | S | 0.7284 ± 0.0087 | 0.3813 ± 0.0128 | 0.6525 ± 0.0083 | 0.4828 ± 0.0070 | 152 | link |
| 73 | tiuel | T | 0.7328 ± 0.0088 | 0.3741 ± 0.0137 | 0.6415 ± 0.0088 | 0.4710 ± 0.0074 | 155 | link |
| 74 | jonswain | T | 0.7341 ± 0.0086 | 0.3352 ± 0.0140 | 0.6119 ± 0.0089 | 0.4489 ± 0.0072 | 176 | link |
| 75 | 3dprint | U | 0.7409 ± 0.0072 | 0.3251 ± 0.0121 | 0.5778 ± 0.0085 | 0.4178 ± 0.0068 | 189 | link |
| 76 | pykel | U | 0.7417 ± 0.0086 | 0.3023 ± 0.0161 | 0.6205 ± 0.0090 | 0.4581 ± 0.0073 | 221 | link |
| 77 | wleco22 | V | 0.7557 ± 0.0092 | 0.3329 ± 0.0149 | 0.6315 ± 0.0087 | 0.4619 ± 0.0073 | 201 | link |
| 78 | femisegvn | W | 0.7614 ± 0.0091 | 0.3253 ± 0.0148 | 0.6205 ± 0.0086 | 0.4512 ± 0.0072 | 190 | link |
| 79 | Kandagalla | X | 0.7716 ± 0.0090 | 0.2747 ± 0.0157 | 0.5789 ± 0.0096 | 0.4152 ± 0.0077 | 240 | link |
| 80 | csaba-percept-bio | Y | 0.7736 ± 0.0081 | 0.2929 ± 0.0129 | 0.5988 ± 0.0089 | 0.4331 ± 0.0074 | 247 | link |
| 81 | tomatpercept | Y | 0.7736 ± 0.0081 | 0.2929 ± 0.0129 | 0.5988 ± 0.0089 | 0.4331 ± 0.0074 | 248 | link |
| 82 | Harshit494 | Z | 0.7765 ± 0.0089 | 0.3186 ± 0.0133 | 0.6191 ± 0.0088 | 0.4506 ± 0.0074 | 208 | link |
| 83 | vibeADMET | @a | 0.7822 ± 0.0092 | 0.3015 ± 0.0141 | 0.6041 ± 0.0087 | 0.4398 ± 0.0073 | 207 | link |
| 84 | SystemsCBLab | @b | 0.7926 ± 0.0082 | 0.2898 ± 0.0125 | 0.5955 ± 0.0093 | 0.4349 ± 0.0075 | 261 | link |
| 85 | XeonChem | @c | 0.7964 ± 0.0109 | 0.1959 ± 0.0200 | 0.6288 ± 0.0093 | 0.4632 ± 0.0076 | 198 | link |
| 86 | Discoverybytes | @c | 0.7978 ± 0.0091 | 0.2630 ± 0.0146 | 0.5599 ± 0.0099 | 0.4019 ± 0.0080 | 259 | link |
| 87 | mkruege | @d | 0.8011 ± 0.0110 | 0.2317 ± 0.0187 | 0.6270 ± 0.0082 | 0.4549 ± 0.0068 | 214 | link |
| 88 | massazahdeh | @d | 0.8014 ± 0.0288 | -7.9559 ± 3.1500 | 0.6992 ± 0.0075 | 0.5244 ± 0.0066 | 149 | link |
| 89 | AIDDLin | @e | 0.8119 ± 0.0098 | 0.2048 ± 0.0172 | 0.5706 ± 0.0089 | 0.4119 ± 0.0072 | 242 | link |
| 90 | rymsnyde | @f | 0.8308 ± 0.0104 | 0.1705 ± 0.0181 | 0.5764 ± 0.0089 | 0.4179 ± 0.0072 | 277 | link |
| 91 | boltzmann4 | @g | 0.8402 ± 0.0098 | 0.2004 ± 0.0169 | 0.6317 ± 0.0084 | 0.4617 ± 0.0071 | 284 | link |
| 92 | Apxjmd | @h | 0.8591 ± 0.0091 | 0.1321 ± 0.0162 | 0.5152 ± 0.0093 | 0.3712 ± 0.0072 | 296 | link |
| 93 | eachanjohnson | @i | 0.8886 ± 0.0086 | 0.1130 ± 0.0141 | 0.4971 ± 0.0096 | 0.3528 ± 0.0074 | 303 | link |
| 94 | metro_mehed | @i | 0.8892 ± 0.0094 | 0.1210 ± 0.0148 | 0.5396 ± 0.0094 | 0.3812 ± 0.0073 | 305 | link |
| 95 | sudhir2016 | @j | 0.9010 ± 0.0104 | 0.0614 ± 0.0203 | 0.5365 ± 0.0100 | 0.3784 ± 0.0075 | 309 | link |
| 96 | mtqspr_pcamor | @k | 1.0122 ± 0.0139 | -0.1798 ± 0.0310 | 0.6780 ± 0.0075 | 0.5037 ± 0.0067 | 317 | link |
| 97 | haleemiliyash | @l | 1.0347 ± 0.0119 | -0.2537 ± 0.0258 | 0.4845 ± 0.0106 | 0.3425 ± 0.0080 | 321 | link |
| 98 | latticetower | @m | 1.0542 ± 0.0134 | -0.4338 ± 0.0396 | 0.4646 ± 0.0103 | 0.3290 ± 0.0077 | 331 | link |
| 99 | redjay | @n | 1.2449 ± 0.0139 | -0.5820 ± 0.0285 | 0.1543 ± 0.0106 | 0.1067 ± 0.0075 | 341 | link |
| 100 | boltzmann_xgb | @o | 1.4456 ± 0.0175 | -1.1286 ± 0.0425 | 0.4925 ± 0.0106 | 0.3487 ± 0.0081 | 348 | link |
| 101 | Srajall | @p | 1.5618 ± 0.0184 | -1.7338 ± 0.0619 | 0.3041 ± 0.0090 | 0.2245 ± 0.0071 | 346 | link |
| 102 | little-WM-atw-2 | @r | 1.9202 ± 0.0240 | -2.7940 ± 0.0710 | 0.0242 ± 0.0120 | 0.0171 ± 0.0082 | 361 | link |
| 103 | Sean-Wong | @q | 2.7406 ± 0.0329 | -5.5734 ± 0.1445 | 0.6353 ± 0.0090 | 0.4673 ± 0.0072 | 396 | link |

Congratulations to all participants on your efforts, and we look forward to seeing you at the upcoming webinars and in our next blind challenges!

Questions or Ideas?

We’d love to hear from you, whether you want to learn more, have ideas for future challenges, or wish to contribute data to our efforts.

Join the OpenADMET Discord or contact us at openadmet@omsf.io.

Let’s work together to transform ADMET modeling and accelerate drug discovery!

Acknowledgements

We gratefully acknowledge Jon Ainsley, Andrew Good, Elyse Bourque, Lakshminarayana Vogeti, Renato Skerlj, Tiansheng Wang, and Mark Ledeboer for generously providing the Expansion Therapeutics dataset used in this challenge as an in-kind contribution.
