Model Card for mBERT Manipulative Language Detector

ๆœฌๆจกๅž‹็”จไบŽๆฃ€ๆต‹ไธญๆ–‡ๅ’Œ่‹ฑๆ–‡ๆ–‡ๆœฌไธญ็š„ๆ“็บตๆ€ง่ฏญ่จ€๏ผˆManipulative Language๏ผ‰๏ผŒไพ‹ๅฆ‚้šๆ€งๆŽงๅˆถใ€ๆƒ…ๆ„Ÿๅ‹’็ดขใ€่ฏญ่จ€ๆ“ๆŽง็ญ‰๏ผŒๅนฟๆณ›ๅบ”็”จไบŽ็คพไบคๅฟƒ็†ใ€ๆ–‡ๆœฌ็ญ›ๆŸฅๅ’Œๅ†…ๅฎนๅฎกๆ ธ็ญ‰ๅœบๆ™ฏใ€‚

๐Ÿง  Model Details

  • Developed by: LilithHu
  • Finetuned from: google-bert/bert-base-multilingual-cased
  • Languages: Chinese, English
  • License: other
  • Model type: text classification model (binary classifier: manipulative / non-manipulative)

๐Ÿ”ง Uses

โœ… Direct Use

  • ่พ“ๅ…ฅไธ€ๆฎตๆ–‡ๆœฌ๏ผŒๆจกๅž‹ๅฐ†่ฟ”ๅ›ž่ฏฅๆ–‡ๆœฌๆ˜ฏๅฆๅŒ…ๅซๆ“็บตๆ€ง่ฏญ่จ€ใ€‚
  • ๅฏ้€š่ฟ‡ Hugging Face Inference API ๆˆ– Web UI๏ผˆStreamlit๏ผ‰็›ดๆŽฅ่ฐƒ็”จใ€‚

๐Ÿ‘ฅ Intended Users

  • NLP ็ ”็ฉถ่€…
  • ๅ†…ๅฎนๅฎกๆ ธไปŽไธš่€…
  • ๅฟƒ็†ๅญฆ็ ”็ฉถไบบๅ‘˜
  • ็คพไบคๅนณๅฐๆˆ–ๅฏน่ฏ็ณป็ปŸๅผ€ๅ‘่€…

๐Ÿšซ Out-of-Scope Use

  • ๆœฌๆจกๅž‹ไธ้€‚ๅˆ็”จไบŽ๏ผš

    • ๆณ•ๅพ‹ๅฎกๅˆค
    • ๅŒป็–—่ฏŠๆ–ญ
    • ็ฒพๅ‡†่ฅ้”€็ญ‰้ซ˜้ฃŽ้™ฉๅ•†ไธš่กŒไธบ
    • ๅˆคๅฎšไป–ไบบๅŠจๆœบใ€ไบบๆ ผๆˆ–ๆƒ…ๆ„Ÿ

โš ๏ธ Bias, Risks and Limitations

่ฏทๆณจๆ„๏ผš

  • ๆจกๅž‹่พ“ๅ‡บไธ็ญ‰ไบŽไบ‹ๅฎž๏ผŒไป…ๅŸบไบŽ่ฎญ็ปƒๆ•ฐๆฎ็š„ๆจกๅผ่ฟ›่กŒๅˆ†็ฑป
  • ๆ“็บตๆ€ง่ฏญ่จ€็š„ๅˆคๆ–ญๅธฆๆœ‰ไธ€ๅฎšไธป่ง‚ๆ€งไธŽๆ–‡ๅŒ–ๅๅทฎ
  • ไธๅบ”่ขซ็”จไบŽ่ฏ„ๅˆคๅ…ทไฝ“ไธชไบบใ€ๆƒ…ๆ„Ÿๆˆ–่กŒไธบๆญฃๅฝ“ๆ€ง

โœ… ๅปบ่ฎฎ

ไฝฟ็”จ่€…ๅบ”็ป“ๅˆไบบๅทฅๅˆคๆ–ญ๏ผŒๅคšๆจกๆ€ใ€ๅคšๆธ ้“ๅœฐ็†่งฃๆ–‡ๆœฌๅซไน‰ใ€‚ๅฏนไบŽๆจกๅž‹้ข„ๆต‹็ป“ๆžœไธๅฏ็›ฒไฟก๏ผŒๅบ”่ง†ไธบ่พ…ๅŠฉๅทฅๅ…ทใ€‚

๐Ÿš€ How to Use

from transformers import pipeline

# Load the fine-tuned detector from the Hugging Face Hub
classifier = pipeline("text-classification", model="LilithHu/mbert-manipulative-detector")

# Classify a single sentence (Chinese or English); returns a list of label/score dicts
result = classifier("我爱你")
print(result)

ไนŸๅฏ้€š่ฟ‡็ปˆ็ซฏ่ฐƒ็”จ๏ผš

curl -X POST https://api-inference.huggingface.co/models/LilithHu/mbert-manipulative-detector \
  -H "Authorization: Bearer <your_hf_token>" \
  -H "Content-Type: application/json" \
  -d '{"inputs": "ๆˆ‘็ˆฑไฝ "}'

๐Ÿ‹๏ธ Training Details

๐Ÿ“š Training Data

  • CDial-GPT/toy_valid
  • thu-coai/esconv and cdconv datasets
  • A self-built Chinese corpus of manipulative language (not publicly released)

โš™๏ธ Training Procedure

  • ่ฎญ็ปƒๅนณๅฐ๏ผšGoogle Colab๏ผŒGPU๏ผšT4
  • Epochs: 3
  • Batch size: 32
  • Optimizer: AdamW
  • Learning rate: 2e-5
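
A minimal sketch of how these settings map onto the transformers Trainer API, assuming a binary classification head on the base checkpoint named above. The placeholder dataset and output directory are illustrative; the actual training script may differ.

from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "google-bert/bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Tiny placeholder dataset so the sketch is self-contained; the real training
# data are the corpora listed under Training Data
toy = Dataset.from_dict({"text": ["You made me do this.", "谢谢你的帮助"], "label": [1, 0]})
toy = toy.map(lambda ex: tokenizer(ex["text"], truncation=True, padding="max_length", max_length=128))

# Hyperparameters taken from the list above; AdamW is the Trainer's default optimizer
args = TrainingArguments(
    output_dir="mbert-manipulative-detector",
    num_train_epochs=3,
    per_device_train_batch_size=32,
    learning_rate=2e-5,
)

Trainer(model=model, args=args, train_dataset=toy).train()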

๐Ÿ“Š Evaluation

  • Accuracy: 0.**
  • Precision: 0.**
  • Recall: 0.**
  • F1-score: 0.**
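
The metrics above follow the standard binary-classification definitions. A short sketch of how they could be reproduced on a labeled evaluation split, using illustrative placeholder labels (1 = manipulative):

from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Placeholder gold labels and model predictions; replace with a real evaluation split
y_true = [1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 0]

precision, recall, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="binary")
print(f"Accuracy:  {accuracy_score(y_true, y_pred):.2f}")
print(f"Precision: {precision:.2f}")
print(f"Recall:    {recall:.2f}")
print(f"F1-score:  {f1:.2f}")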

๐ŸŒ Environmental Impact

  • ่ฎญ็ปƒๆ—ถ้—ด็บฆ 3 ๅฐๆ—ถ๏ผŒไฝฟ็”จ Google Colab GPU๏ผˆT4๏ผ‰
  • ไผฐ็ฎ—็ขณๆŽ’ๆ”พ < 2kg CO2eq

๐Ÿ”’ Disclaimer

  • ๆœฌๆจกๅž‹็”จไบŽ็ ”็ฉถไธŽๆ•™่‚ฒ็”จ้€”๏ผŒไธๅพ—ไฝœไธบๆณ•ๅพ‹ใ€้“ๅพทใ€ๅŒป็–—ๆˆ–ๅ•†ไธšๅˆคๆ–ญไพๆฎใ€‚
  • ้ข„ๆต‹็ป“ๆžœไป…ไธบๅ‚่€ƒ๏ผŒไฝฟ็”จ่€…้œ€่‡ช่กŒๆ‰ฟๆ‹…้ฃŽ้™ฉใ€‚
  • ่ฏทๅ‹ฟ็”จไบŽๆถๆ„ๆ”ปๅ‡ปใ€่ˆ†ๆƒ…ๆ“็บตๆˆ–่ฏฏๅฏผไป–ไบบ่กŒไธบใ€‚

๐Ÿ“Œ Model Card Authors

LilithHu

๐Ÿ“ฌ Contact

ๅฆ‚้œ€ๅ้ฆˆๅปบ่ฎฎ๏ผŒ่ฏท้€š่ฟ‡ Hugging Face ็•™่จ€่”็ณปไฝœ่€…ใ€‚

๐Ÿ“š Citation

@misc{LilithHu2025,
  title={mBERT Manipulative Language Detector},
  author={LilithHu},
  year={2025},
  url={https://huggingface.co/LilithHu/mbert-manipulative-detector}
}