Upload README.md with huggingface_hub
README.md CHANGED
@@ -42,7 +42,7 @@ print(logits.shape) # (batch_size, num_labels), (2, 2)
 ESM++ weights are fp32 by default. You can load them in fp16 or bf16 like this:
 ```python
 import torch
-model = AutoModelForMaskedLM.from_pretrained('Synthyra/ESMplusplus_small', trust_remote_code=True,
+model = AutoModelForMaskedLM.from_pretrained('Synthyra/ESMplusplus_small', trust_remote_code=True, dtype=torch.float16) # or torch.bfloat16
 ```
 
 ## Embed entire datasets with no new code

@@ -157,9 +157,9 @@ The most gains will be seen with PyTorch > 2.5 on linux machines.
 If you use any of this implementation or work please cite it (as well as the ESMC preprint).
 
 ```
-@misc {
+@misc {FastPLMs,
 author = { Hallee, Logan and Bichara, David and Gleghorn, Jason P.},
-title = {
+title = { FastPLMs: Fast, efficient, protein language model inference from Huggingface AutoModel.},
 year = {2024},
 url = { https://huggingface.co/Synthyra/ESMplusplus_small },
 DOI = { 10.57967/hf/3726 },
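The `dtype` change above matters because fp16 and bf16 halve the memory footprint of the checkpoint relative to the fp32 default. A back-of-envelope sketch in plain Python (the parameter count below is an illustrative placeholder, not the actual ESM++ small size):

```python
# Rough memory estimate for model weights in fp32 vs fp16/bf16.
# NOTE: num_params is a placeholder for illustration, not the real ESM++ count.
num_params = 300_000_000

bytes_fp32 = num_params * 4  # fp32 stores 4 bytes per parameter
bytes_half = num_params * 2  # fp16 and bf16 both store 2 bytes per parameter

print(f"fp32:      {bytes_fp32 / 1e9:.1f} GB")  # 1.2 GB
print(f"fp16/bf16: {bytes_half / 1e9:.1f} GB")  # 0.6 GB
```

Note that bf16 keeps the same exponent range as fp32 (trading mantissa precision), which is why it is often the safer half-precision choice on hardware that supports it.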