Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- ytc_UgxVC5bP0… — "I need fellow artists to understand that people genuinely think they're entitled…"
- rdc_djr0vhu — "Worst case scenario also includes corrective rape. There are too many risks to j…"
- ytc_UgzQY-6PD… — "The car cannot defend itself in a court of law if a human does something high ri…"
- ytr_Ugxk3Zr5h… — "Literally so many young kids seem to be using those ai chat apps to talk to thei…"
- ytc_UggDy9xEJ… — "After all cars are made autonomous (which will come with time) the cars should b…"
- ytc_UgzI7PMIH… — "If the algorithms really worked, most of the police force would be assigned to w…"
- ytc_Ugzyujkxi… — "Hello, I’m gonna chat with you on counter area. What’s your name on character AI…"
- ytr_UgzuXqvJv… — "@ I mean I guess but maybe you don’t really know the definition of tool: usually…"
Comment
Regarding intuitions and pursuing it, a brilliant AI developer was killed November of 2024. He raised concerns and alerted chatGPT and was later eliminated for being a whistleblower. Look up Suchir Balaji. People didn’t pay attention and media didn’t cover it because of the bias against legal immigration brewing at the time and Suchir’s courage and honesty didn’t get the desired result or attention.
youtube · AI Governance · 2025-06-16T10:1… · ♥ 127
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | virtue |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
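Each coded comment carries one value per dimension. A minimal sketch of validating a coded record against the category sets visible on this page (the allowed values below are inferred from the samples shown here, not an exhaustive codebook):

```python
# Allowed values per coding dimension, inferred from the samples on
# this page (an assumption, not the project's full codebook).
SCHEMA = {
    "responsibility": {"none", "developer", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "liability", "regulate", "ban"},
    "emotion": {"indifference", "resignation", "outrage", "fear",
                "approval", "mixed"},
}

def validate_record(record: dict) -> list:
    """Return a list of problems; an empty list means the record is valid."""
    problems = []
    for dim, allowed in SCHEMA.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The record shown in the Coding Result table above.
example = {"responsibility": "company", "reasoning": "virtue",
           "policy": "liability", "emotion": "outrage"}
print(validate_record(example))  # → []
```

A check like this is useful before trusting raw model output, since an LLM can emit values outside the expected categories.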
Raw LLM Response
[
{"id":"ytc_UgyJQKiX554Mqk_k9Rx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgztoVaOePQSQxpFUFN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwxIWF9p3zKggVtMTV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwzgk7vT2408EDMSh14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzslLXv0Un5e9InBE14AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwYrj26y4CyzIsscMd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwHuR7mWuVK2gAiWRF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugzo6ut3Mm-g7p04l5Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzqrTwXKxAjcB5s-vx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwhdIQkttAjjCF9RwB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
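A raw response like the one above is a JSON array of coded records, each carrying the comment ID it applies to. A minimal sketch of parsing such a response and indexing it for the look-up-by-comment-ID workflow described at the top of this page (the two records below are copied from the array above):

```python
import json

# A raw LLM response: a JSON array of coded records keyed by comment ID.
raw = """[
{"id":"ytc_UgzslLXv0Un5e9InBE14AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwHuR7mWuVK2gAiWRF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"}
]"""

records = json.loads(raw)
# Index by comment ID so any single coding can be retrieved directly.
by_id = {rec["id"]: rec for rec in records}

coding = by_id["ytc_UgzslLXv0Un5e9InBE14AaABAg"]
print(coding["policy"])  # → liability
```

Building the index once per response batch keeps the per-comment lookup O(1), which matters when inspecting individual codings across many batches.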