Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- AI stans are like those kids at school who notice you can draw, tell you to draw… (ytc_UgyG9E5Rn…)
- If he really wants to copyright something he can make a book collecting together… (ytc_Ugy3E2aGH…)
- @MrNavinBossno it is not better they both are excellent but you have to know h… (ytr_UgyTKSjzw…)
- He is mad that EU has banned AI manipulation of people's opinions and states of … (ytc_UgxLs4up-…)
- Sentience isn't just complex output or language fluency, it's the emergence of i… (ytc_UgxgQ37yG…)
- "I don't have the talent for it" then learn! it kind of makes me frustrated that… (ytc_Ugw5Q0Hef…)
- The term AI is disingenious. It's literally just mathematical Machine Learning. … (rdc_mnpc2s6)
- as far as waymo safety is concerned: imho, we should be incredibly skeptical t… (ytc_UgyNdu5z0…)
Comment
> The strangest thing here is that they're talking about what going to happen and giving all this scary numbers 99% unemployment etc. But where is the remark that this will happen only if there is no Force will be applied to stop it. Government laws, Agreements between countries, Wars by the end.
> I don't think when this will go closer to get out of control that is everyone will be ignoring the fact. You know there is nuclear bombs, that is good, and bad right?
> But we are not applying them and controlling the every movement when it's potentially could of resulted serious impact on human. In different words, there is a lot of the things that could go wrong in building super intelligent AI. In my opinion it have to be strictly controlled.
Source: youtube · Topic: AI Governance · Posted: 2025-09-05T22:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_UgwtMZ498dGVfo_bcHd4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugwn4LMAaKJFfknwwI54AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugy736Rkwl_EJBQ7tyB4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwQ_XSNGfoAHITRtKV4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgyUAZPnKQPOmFODa_94AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwyyrIbaG3NGt7l-Q94AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_Ugysq7uIQRYlKcYFRO94AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzQAc0zUuP3Vz60qzZ4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzKf2QovsHBfISLLsp4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugx9QPKaz0BPMgRvwT14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
```
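The lookup-by-comment-ID workflow described above can be sketched in a few lines: parse the raw batch response as JSON and index the records by their `id` field. This is a minimal sketch, assuming the schema shown in the dump (four coded dimensions per record, matching the Coding Result table); the function name and the validation step are illustrative, not part of the tool itself.

```python
import json

# A two-record excerpt of a raw batch response, in the format shown above.
raw_response = '''
[
  {"id": "ytc_UgwtMZ498dGVfo_bcHd4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugwn4LMAaKJFfknwwI54AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
'''

# The four coded dimensions from the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(response_text):
    """Parse a batch response and index records by comment ID,
    skipping any record that is missing a coded dimension."""
    index = {}
    for rec in json.loads(response_text):
        if all(dim in rec for dim in DIMENSIONS):
            index[rec["id"]] = rec
    return index

codes = index_by_id(raw_response)
print(codes["ytc_Ugwn4LMAaKJFfknwwI54AaABAg"]["policy"])  # regulate
```

Indexing once and looking up by ID afterwards keeps each inspection O(1), which matters when a run codes thousands of comments across many batch responses.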