lhallee committed
Commit 6497af1 · verified · 1 Parent(s): bdb4649

Upload README.md with huggingface_hub

Files changed (1):
1. README.md +3 -3
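For reference, a commit like this one is typically produced with the huggingface_hub client, as the commit message suggests. A minimal sketch, assuming the repo id from the URL in the diff below and this commit's message:

```python
from huggingface_hub import HfApi

api = HfApi()
# Upload the model card in a single commit; parameters mirror this commit's
# message and repo. Requires write access to the repository.
api.upload_file(
    path_or_fileobj="README.md",
    path_in_repo="README.md",
    repo_id="Synthyra/ESMplusplus_small",
    commit_message="Upload README.md with huggingface_hub",
)
```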
README.md CHANGED

@@ -42,7 +42,7 @@ print(logits.shape) # (batch_size, num_labels), (2, 2)
 ESM++ weights are fp32 by default. You can load them in fp16 or bf16 like this:
 ```python
 import torch
-model = AutoModelForMaskedLM.from_pretrained('Synthyra/ESMplusplus_small', trust_remote_code=True, torch_dtype=torch.float16) # or torch.bfloat16
+model = AutoModelForMaskedLM.from_pretrained('Synthyra/ESMplusplus_small', trust_remote_code=True, dtype=torch.float16) # or torch.bfloat16
 ```

 ## Embed entire datasets with no new code
@@ -157,9 +157,9 @@ The most gains will be seen with PyTorch > 2.5 on linux machines.
 If you use any of this implementation or work please cite it (as well as the ESMC preprint).

 ```
-@misc {ESM++,
+@misc {FastPLMs,
 author = { Hallee, Logan and Bichara, David and Gleghorn, Jason P.},
-title = { ESM++: Efficient and Hugging Face compatible versions of the ESM Cambrian models},
+title = { FastPLMs: Fast, efficient, protein language model inference from Huggingface AutoModel.},
 year = {2024},
 url = { https://huggingface.co/Synthyra/ESMplusplus_small },
 DOI = { 10.57967/hf/3726 },
```
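The substance of the first hunk is renaming the half-precision keyword. A minimal sketch of the updated call, assuming a recent transformers release in which `dtype` supersedes the older `torch_dtype` argument:

```python
import torch
from transformers import AutoModelForMaskedLM

# Load ESM++ small directly in half precision via the renamed keyword.
model = AutoModelForMaskedLM.from_pretrained(
    'Synthyra/ESMplusplus_small',
    trust_remote_code=True,
    dtype=torch.float16,  # or torch.bfloat16
)
print(model.dtype)  # expected: torch.float16
```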