Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "@Zac_Frost Sure, it may not be SkyNet, but it doesn't have to be that sophistica…" (ytr_UgziWOaTS…)
- "@loveboxinglucky1716 it's humans abusing it that are the problem, not the AI it…" (ytr_UgzAf6soI…)
- "In my opinion A.I should be used generously, A.I art IS only acceptable with gra…" (ytc_Ugy3CXc-2…)
- "The problem is not the AI, but the humans that controls and making those AI robo…" (ytc_UgyElWTBg…)
- "AI can not be bad if it is smart. It is only bad for bad people. Angles are bad …" (ytc_Ugz7gmyXt…)
- "The people in the West are just so desperate to project their own fictional dyst…" (ytc_UgzSUlXOG…)
- "If AI killed all rich people and was actually inteligent I would have been fine …" (ytr_Ugz5Q1FOs…)
- "Anyone using Ai for art or design should not call themself an artist. They're ju…" (ytc_Ugzue64WC…)
Comment
The call center people will rise to the top of the queue- sounds good. They'll get the calls when AI gives up. From my experience with vibe coding (using AI to interactively write code), AI doesn't give up. It just keeps coming up with new misinformation and regurgitating the old. Somebody (at the current state of the art) would have to be listening and intervene. AI has no way to know when it has failed.
youtube · AI Governance · 2026-04-23T06:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[{"id":"ytc_Ugzv_sUeIkIpB6R3Wid4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxWwQtWApFtH99yUVl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxiokLHVRIQkGY4EhF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy9ij4biNZBHlbZpMR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwEKci8xKKmTyClAIt4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz3UIrDAGck8aWNV3p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzXQASHevScr9RXtZR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxQc0f_fVvNnPleqfd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxX3OtmoYg2y3jDy8R4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzjv8DJzLmOLKNEMyJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"outrage"}]
```
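The raw response is a JSON array with one object per coded comment, keyed by comment ID across the four dimensions shown in the coding table. A minimal sketch of parsing and validating such a batch (the allowed category values are inferred only from the sample output above; the real codebook may define more):

```python
import json

# Allowed values per dimension, inferred from the sample response above
# (assumption: the actual codebook may include additional categories).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "government", "distributed"},
    "reasoning": {"none", "mixed", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"approval", "indifference", "fear", "mixed", "outrage"},
}

def parse_coded_batch(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM response into {comment_id: codes}, rejecting
    malformed items so one bad row doesn't silently enter the dataset."""
    coded = {}
    for item in json.loads(raw):
        cid = item.get("id")
        if not cid:
            raise ValueError(f"item missing 'id': {item}")
        for dim, allowed in ALLOWED.items():
            if item.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {item.get(dim)!r}")
        coded[cid] = {dim: item[dim] for dim in ALLOWED}
    return coded

# Hypothetical single-item batch for illustration:
raw = '[{"id":"ytc_example","responsibility":"company",' \
      '"reasoning":"virtue","policy":"none","emotion":"fear"}]'
print(parse_coded_batch(raw)["ytc_example"]["responsibility"])  # → company
```

Failing loudly on an unknown category is a deliberate choice here: LLM coders occasionally emit values outside the codebook, and it is safer to surface those rows for re-coding than to coerce them silently.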