sumitranjan commited on
Commit
bcd7c78
Β·
verified Β·
1 Parent(s): 5e5543f

Update README.md

Browse files
Files changed (1) hide show
  1. README.md +17 -2
README.md CHANGED
@@ -1,3 +1,19 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
  # πŸ›‘οΈ PromptShield
2
 
3
  **PromptShield** is a prompt classification model designed to detect **unsafe**, **adversarial**, or **prompt injection** inputs. Built on the `xlm-roberta-base` transformer, it delivers high-accuracy performance in distinguishing between **safe** and **unsafe** prompts β€” achieving **99.33% accuracy** during training.
@@ -105,5 +121,4 @@ print("🟒 Safe" if prediction == 0 else "πŸ”΄ Unsafe")
105
 
106
  πŸ“„ License
107
 
108
- MIT License
109
-
 
1
+ ---
2
+ license: mit
3
+ datasets:
4
+ - xTRam1/safe-guard-prompt-injection
5
+ language:
6
+ - en
7
+ metrics:
8
+ - accuracy
9
+ base_model:
10
+ - FacebookAI/xlm-roberta-base
11
+ pipeline_tag: text-classification
12
+ library_name: keras
13
+ tags:
14
+ - cybersecurity
15
+ - llmsecurity
16
+ ---
17
  # πŸ›‘οΈ PromptShield
18
 
19
  **PromptShield** is a prompt classification model designed to detect **unsafe**, **adversarial**, or **prompt injection** inputs. Built on the `xlm-roberta-base` transformer, it delivers high-accuracy performance in distinguishing between **safe** and **unsafe** prompts β€” achieving **99.33% accuracy** during training.
 
121
 
122
  πŸ“„ License
123
 
124
+ MIT License