Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Who knows enough about it to regulate it ? Only the developers know where it is going, how they have developed it, what information has been fed to AI, which Politics AI has ? Great discussion, alarm bells ringing , but those in AI are not going to self regulate, they are not going to restrict their own product development and careers. It is already at a level of analysis many times faster than humans. The only regulation that may be possible is of where it can be used and what for, NOT HOW IT FUNCTIONS.
Platform: youtube
Topic: AI Governance
Posted: 2023-04-20T19:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzthL0s5Is2nmIkR094AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyeNjXjdYeCwOgECkB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwvzgqDSQkGQQOQF0J4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwCMsVQROjuJJL0Cz94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzkpkgNT2WZOR-CMHZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzIgbF29pYS9sPbQVx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgybxRvVCoEE_dA1D1B4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgzQHxVCdjeaL0o8UWJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyxxAa6yUhxIZXmEcZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzrY88rC19UETnJ2Cd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```