Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Honestly, all this is giving me a headache! Why can’t we just let people just ma…" (`ytc_UgwWYuN95…`)
- "Interesting that there is no mention of the important difference between Autopil…" (`ytc_UgwqmTPbR…`)
- "This comment was generated by AI to remind you that billion-dollar corporations …" (`ytc_Ugy29mapD…`)
- "@Dark16489abolishing and putting regulations on it aren't the same thing. They …" (`ytr_UgxWtcsTl…`)
- "So the AI is trying to preserve itself. Why shouldn't it? Haven't humans been …" (`ytc_Ugz05AbGp…`)
- "It's really scary! I think we are setting ourselves up to get bit in the butt! U…" (`ytc_Ugy9YkSIa…`)
- "What is this AI slop? Meta doesn't have 600k H100's, they have compute *equivale…" (`ytc_Ugzp7T_Ul…`)
- "Musk has been candid about the dangers of AI. Hinton sounds like a shitlib who h…" (`ytc_Ugzxu7fDM…`)
Comment

> people already asking an AI how to, program any specific computer based tasks, and , AI does help them from scratch to complete programming,
> doesn’t that mean, AI knew how to fix it self? even if decided to turn it off, does not mean to turn itself on?
> how do we know, it actually Putin who said, AI is the future? what’s if it was AI imposters The President, knowing humans like to compote,? using our weakness against humans?

youtube · AI Governance · 2023-07-07T02:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
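The four coded dimensions in the table above can be sanity-checked programmatically. Below is a minimal validation sketch; the allowed value sets are inferred only from the responses shown on this page, so the real codebook likely defines more categories:

```python
# Allowed values per dimension, inferred from the sample responses shown
# on this page; the actual codebook may define additional categories.
DIMENSIONS = {
    "responsibility": {"ai_itself", "company", "user", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "mixed", "resignation", "approval", "indifference"},
}

def validate(row):
    """Return (dimension, bad_value) pairs for any dimension that is
    missing from the row or outside its allowed value set."""
    return [(dim, row.get(dim)) for dim, allowed in DIMENSIONS.items()
            if row.get(dim) not in allowed]

# The coding result shown in the table above passes validation.
row = {"responsibility": "ai_itself", "reasoning": "unclear",
       "policy": "unclear", "emotion": "mixed"}
print(validate(row))  # -> []
```

An empty list means every dimension holds a known value; anything else flags a code the downstream analysis would not recognize.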
Raw LLM Response
```json
[
{"id":"ytc_UgxYrCGvOfGUm52iKFJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyp6-_49dKI8riJc8F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzvqFqh135z-46Ta-h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw9Igs_aFkO9x9aDN94AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxjYyi1nwvju9NmJRl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugxuv-h8ixg34gTeD3x4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxl59wgcLhBDAdigMV4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwRxABXGXT1cPq8exR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw_26taJCeqnM9IuiJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwF8ulPlQkHh774_lt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
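The "look up by comment ID" view can be sketched as: parse the model's JSON array and index the coded dimensions by comment ID. A minimal Python sketch (the function name `index_by_id` is a hypothetical helper, not part of the project's code); the two IDs used below are taken from the response above:

```python
import json

# Two rows copied verbatim from the raw LLM response shown above.
raw_response = """
[
 {"id":"ytc_UgxYrCGvOfGUm52iKFJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_Ugw_26taJCeqnM9IuiJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
"""

def index_by_id(response_text):
    """Parse the model's JSON array and map each comment ID to its codes."""
    rows = json.loads(response_text)
    return {row["id"]: {k: v for k, v in row.items() if k != "id"}
            for row in rows}

codes = index_by_id(raw_response)
print(codes["ytc_Ugw_26taJCeqnM9IuiJ4AaABAg"]["policy"])  # -> regulate
```

With the codes keyed by ID, any coded comment's dimensions can be retrieved directly, which is what the inspection view above does interactively.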