Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- “Look, look, the LLMs are arguing! Look, the chatbot begged for connection—then … (ytc_UgzY7e2qZ…)
- I wrote this a few years ago Fictious Story' A.I police unit kills 2 in car, 5… (ytc_Ugy-5Suas…)
- No it won't, it'll act how it's programmed to act. This is why AI is a dumb idea… (ytr_UgwFKdM61…)
- They've opened Pandora's box, and it's going to destroy society. I've said this … (ytc_UgyYQSVv0…)
- tbh id take ai writing for movies and tv shows rather than people at this point … (ytc_UgyzX7rN4…)
- I can't remember if it was Facebook or Google but a couple years back they did c… (ytc_Ugw72Ug0c…)
- It's programmed to give consensus-based answers, not logical or reasoned ones. A… (ytr_UgygzA5HC…)
- That 3rd point is SO VALID! You know how GenAI is trained on data it gets from t… (ytc_UgzQ__OcL…)
Comment: AI is not even close to being dangerous, at least, in a real sense. I am in agreement, that it shouldn't be allowed to train from an individual without their expressed permission.
Source: youtube
Topic: AI Governance
Date: 2023-06-11T09:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[{"id":"ytc_UgygIyqdyOpsBBkOMQZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxCADKYpecFLJ5vBYN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugy1UmdsN7Aw_4CPeY94AaABAg","responsibility":"government","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw4aqvQTgPV40xjdbh4AaABAg","responsibility":"government","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyorlvZSQ6DR-qYrqN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugxypugp3POZvIjy7YR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugyzo9M7M3dYl_Lbdld4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyYp-GSB8Yi6dG49X94AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzaNkR1iht6s1hD6WB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgycLwZ2wCqxWSPulyl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}]
```
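A batch response in this shape can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal example, not the tool's actual pipeline; the allowed value sets are inferred only from the values visible in this dump (the real codebook may contain more categories), and `validate_batch` is a hypothetical helper name.

```python
import json

# Allowed values per coding dimension, inferred from this dump
# (assumption: the actual codebook may define additional categories).
ALLOWED = {
    "responsibility": {"none", "developer", "government", "user"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"approval", "resignation", "outrage", "fear", "mixed"},
}

def validate_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index valid records by comment ID.

    Records with a missing ID or with values outside the inferred codebook
    are skipped, so one malformed record does not poison the whole batch.
    """
    coded = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid:
            continue  # no comment ID to key on
        bad = [dim for dim, ok in ALLOWED.items() if rec.get(dim) not in ok]
        if bad:
            print(f"{cid}: out-of-codebook values in {bad}")
            continue
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded
```

Keying the result by comment ID mirrors the "Look up by comment ID" view above: a single dictionary lookup retrieves the coded dimensions for any inspected comment.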