Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "I don't mind AI (it's quite fun to play with it) ... however we really need to d…" (`ytc_UgyMiKhW1…`)
- "I’m still confused as to why losing jobs is a bad thing? Wouldn’t we just restru…" (`ytc_UgzZ9zzCw…`)
- "OP's title is a misrepresentation not only of the thrust of the article, but of …" (`rdc_jkfoxq0`)
- "I was on character ai. I love playing around on there and doing role plays. I wa…" (`ytc_Ugw9zEs4c…`)
- "Nuclear weapons are a whole lot more than just death. A full out nuclear war wou…" (`rdc_mbv3tib`)
- "he acts like the 'every man' but hes the same as the rest of them.... hes a mult…" (`ytc_Ugx7KKVzK…`)
- "The US has brain farts, it never has any kind of plan or strategy beyond lie, sc…" (`ytc_Ugz8ZOjsB…`)
- "Thank you for your comment! If you're interested in AI discussions, feel free to…" (`ytr_UgzA6Qjd4…`)
Comment
There is so much misinformation here it is hilariously wrong. No, an AI cannot learn in seconds what takes a human 20 years. No, if one AI learns something all the others do not know it instantly. No, the OpenAI revolt was not because they feared the AI they were creating was not constrained (it was because he lied and misled people, for example the board of directors did not know about a major release of a new model until they read it in the news). And more and more and more.
Platform: youtube · Topic: AI Governance · Posted: 2026-03-18T10:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx9Uo6xGrA6Wj5p8CN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwD1QEdBx3wcbqGEOl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgzskbkhiKxO07LHJPh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxfjVnI3o49UZNjFWx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwYInf0ag1DdNOh_oh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwi9X0ptTn05TSxr8d4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgySoDsqDDWpOr97Qrl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxL-Y4g5I1F3Pyt9gV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgzHpjiROB1qnLI5x7x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyc63MHi5YwGcL_JQh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
```
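The raw response above is a JSON array of per-comment codings, each carrying the four dimensions shown in the Coding Result table (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of parsing such a response and indexing it by comment ID, matching the "look up by comment ID" workflow; the `index_codings` helper name and the skip-malformed-entries behavior are assumptions, not the tool's actual implementation:

```python
import json

# A truncated stand-in for a raw model response (two entries from the batch above).
RAW_RESPONSE = """
[
 {"id":"ytc_Ugx9Uo6xGrA6Wj5p8CN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgwYInf0ag1DdNOh_oh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
"""

EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def index_codings(raw: str) -> dict[str, dict]:
    """Parse a raw coding response and index the entries by comment ID.

    Entries missing any expected key are skipped, so one malformed
    object does not discard the whole batch.
    """
    rows = json.loads(raw)
    return {row["id"]: row for row in rows if EXPECTED_KEYS <= row.keys()}


codings = index_codings(RAW_RESPONSE)
print(codings["ytc_UgwYInf0ag1DdNOh_oh4AaABAg"]["emotion"])  # fear
```

Keying by `id` makes the lookup O(1) per query, which matters when the same batch backs both random-sample browsing and direct ID lookup.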