Redactor v0.1
A fully on-chain abusive-language detector. Enter some text and the model evaluates it, increasing the blur applied to any tokens it scores as abusive.
Controls: a text-entry field, a per-token score list (shown as "No tokens to display." until text is entered), a total score (initially 0.00), and two sliders: Adjust Bias (default 0) and Adjust Scale (default 15).
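The page does not document how the Bias and Scale controls combine with a token's score to produce a blur amount. The sketch below is one plausible mapping, purely an assumption for illustration: the function name `blur_radius` and the affine formula are hypothetical, not taken from the app.

```python
# Hypothetical sketch: map a per-token abuse score to a blur radius
# using Bias and Scale controls. The affine formula is an assumption;
# the app's actual mapping is not documented on this page.

def blur_radius(score: float, bias: float = 0.0, scale: float = 15.0) -> float:
    """Map a model score (roughly in [0, 1]) to a blur radius in pixels."""
    radius = (score + bias) * scale
    return max(0.0, radius)  # never blur by a negative amount

# A benign token stays nearly sharp; an abusive one gets heavily blurred.
radii = [blur_radius(s) for s in (0.05, 0.92)]
```

Raising Scale would exaggerate the difference between benign and abusive tokens, while a negative Bias would leave low-scoring tokens completely unblurred.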
Tokenization
Canister: hgczj-kiaaa-aaaao-a3ksq-cai
Query: tokenize_text: (text) → (vec nat64, vec text) query
Inference
Canister: hbd75-hqaaa-aaaao-a3ksa-cai
Query: model_inference: (vec int64) → (vec float32) query
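The two queries above form a pipeline: tokenize_text turns the input into token ids plus their surface strings, and model_inference maps those ids to per-token scores. A local mock of that data flow, matching the Candid signatures, might look like the following; the toy vocabulary and stub scorer are assumptions, since the real tokenizer and model run on-chain in the canisters above.

```python
# Local mock of the two-canister pipeline: tokenize_text returns
# (vec nat64, vec text) - token ids plus matching surface strings -
# and model_inference maps ids (vec int64) to per-token float scores.
# Both bodies are stand-ins; the real work happens on-chain.

from typing import List, Tuple

VOCAB = {"hello": 1, "world": 2}  # toy vocabulary, assumption only

def tokenize_text(text: str) -> Tuple[List[int], List[str]]:
    """Split on whitespace and look up each word; unknown words get id 0."""
    words = text.lower().split()
    return [VOCAB.get(w, 0) for w in words], words

def model_inference(ids: List[int]) -> List[float]:
    """Stub scorer: flag out-of-vocabulary tokens (id 0) with a high score."""
    return [0.9 if i == 0 else 0.1 for i in ids]

ids, tokens = tokenize_text("hello world xyzzy")
scores = model_inference(ids)
# Each token now carries a score the UI can turn into a blur amount.
```

Against the live canisters, the same two calls would be issued as cross-canister or agent queries (e.g. via dfx or an IC agent library) rather than local function calls.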
DecideAI