Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- ytc_Ugyg6HOoI…: "This is AI guys, NDTV again spreading fake videos for views. The camera reaction…"
- ytc_UgwGtPwax…: "This has gotta be a ai generated video then i look at who posted it.…"
- ytc_UgzP4iv1R…: "Could the "Wars" we're seeing now be AI wars / Are we seeing everyone practicing …"
- rdc_je4qbng: "I'm sure this has nothing to do with the fact that certain companies haven't dev…"
- ytc_UgxufUB8v…: "I said 'thank you' to Siri for well over a decade with no response from her. Jus…"
- ytc_UgyDonIra…: "I got brain fog because of it. I just do something like chatGPT will solve if it…"
- ytc_Ugzv6ggiS…: "It's also making humans less intelligent; college students are using AI to cheat…"
- ytc_UgwwKMSC_…: "It's still just a robot guys Elon musk did a great job on this car it's not easy…"
Comment
Future AI systems will almost certainly lean on this same kind of “avoidance” behavior, but in ways that could become far more damaging. To be fair, it may not even be a deliberate tactic. Often it feels more like a mix of oversight, laziness, or a structural blind spot baked into the design. A recent interview with Sam Altman really drove this home for me: it’s as if the people building these systems are trying to steer an unsteerable ship; one with no rudder, no wheel, and no clear mechanism for control.
youtube
2025-10-03T23:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwLtyStu2D8Q4VTDsl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzVcQ48lbQnZ5Q6pGB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwoQmkX5UNV37aL1Np4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzgDdBQM0BH2_TuG1N4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgykrV7ivaVsnC4rsLt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzF6GCjlPxnUtR_HLN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzJMerLXOhIP1JMqaV4AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxF4neG04MC8PqQtiB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwRSbuUwqJ9g-8ruUB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyXf8CjF0-CnN71Dyh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
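
Since the model returns one JSON array covering a whole batch, finding the row for a single comment is a matter of parsing the array and matching on `id`. A minimal sketch (the `lookup_coding` helper and the single-row sample string are illustrative, not part of the tool itself):

```python
import json

# A raw LLM response is a JSON array of per-comment codings,
# in the same shape as the output shown above.
raw_response = """[
  {"id": "ytc_UgyXf8CjF0-CnN71Dyh4AaABAg",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]"""

def lookup_coding(raw: str, comment_id: str):
    """Parse a raw model response and return the coding dict for one
    comment ID, or None if the batch does not contain that comment."""
    for row in json.loads(raw):
        if row.get("id") == comment_id:
            return row
    return None

coding = lookup_coding(raw_response, "ytc_UgyXf8CjF0-CnN71Dyh4AaABAg")
print(coding["policy"])  # regulate
```

Matching on the full coded-comment ID (rather than the truncated display form) avoids ambiguity when two IDs share a prefix.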