Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "The whole thing with AI art has a fundamental flaw rooted in a simple lack of cr…" (`ytc_UgxfbHd87…`)
- "You can say it can all do that, if someone hacked that robot while you asleep ab…" (`ytc_Ugz0Zemps…`)
- "I find it difficult to believe a Google engineer is so naive. Is it not plausibl…" (`ytc_UgzYGGttg…`)
- "Dummies are always trying to take the easy way out tho lol. Makes sense they emb…" (`ytc_UgwKPdEGz…`)
- "Tried this a while ago with early ChatGPT and with some other models. The respon…" (`ytc_UgyktAk2b…`)
- "A.I is used in the modern day, covert, MK Ultra mind control operations . #gov s…" (`ytc_Ugx1qxkbK…`)
- "@kristiandixon3510yes. AI slop is objectively emotionless and lacks humanity. I…" (`ytr_UgxReG7si…`)
- "I am the one who created the base model that Chat GPT and Anthropic used to buil…" (`ytc_Ugwa2ME36…`)
Comment
A great debate and very edifying.
The take away for me is that if - amongst people with this amount of related knowledge - there is this depth of passionate debate and this little consensus why is AI being pushed so hard and fast by our governments?
With the enormous potential and actual cost of AI (economic, environmental and social) why is it being pushed for so hard when simultaneously people are being pressured to happily endure a cost of living 'crisis' in order to otherwise be better environmentally?
Source: youtube · Topic: AI Governance · Posted: 2026-03-23T11:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgwPcfYmSn5l6VDtJsh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx8dgkpmty0XJHyQyl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzqpSEbiIuAIteVycR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxIyXALGy2B2ppXNFx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzpct9VvWjCBXnOcJx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxxzzinn5dmEBtzyhx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugy7H-X4atx29ZZ_Fpp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_Ugyf5_hIYtrkjYNPvOF4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyrHCurD917yHa_MHl4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw1FwtkXeRSY2DdAGF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
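A raw response like the one above can be parsed and sanity-checked before its rows are stored as coding results. The sketch below infers the allowed values for each dimension from the sample output shown here; this is an assumption, since the full codebook may permit additional categories.

```python
import json

# Allowed values per coding dimension, inferred from the sample
# response above (an assumption -- the real codebook may be larger).
SCHEMA = {
    "responsibility": {"company", "government", "user", "distributed", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "contractualist", "virtue", "mixed"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"outrage", "resignation", "approval", "indifference", "fear", "mixed"},
}

def validate(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every coded comment row."""
    rows = json.loads(raw)
    for row in rows:
        # Comment IDs in the samples use ytc_ (top-level) or ytr_ (reply) prefixes.
        if not row.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected id: {row.get('id')!r}")
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: bad {dim} = {row.get(dim)!r}")
    return rows

# Minimal usage example with one row in the same shape as the raw response.
raw = (
    '[{"id":"ytc_UgwPcfYmSn5l6VDtJsh4AaABAg","responsibility":"company",'
    '"reasoning":"deontological","policy":"regulate","emotion":"outrage"}]'
)
rows = validate(raw)
print(len(rows))  # -> 1
```

Validating eagerly like this surfaces any off-schema label (or a malformed ID) at ingestion time, rather than as a silent mismatch when a coded comment is later looked up in the inspector.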