Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

| Comment ID | Excerpt |
|---|---|
| ytc_UgxAEqSIf… | I have absolutely no idea whether this guy is trying to make fun of the AI "arti… |
| ytc_UgxcKI-kF… | We need to start building the Blade Runner AI, the AI that is trained and tasked… |
| ytc_Ugx2tX0Ak… | F#&k AI. We need to fight this! We get frustrated when we can't get a human on t… |
| ytc_Ugw3-h6mu… | AI hype crash will come soon, probably next year or early 2027. Anybody who shor… |
| ytc_UgzFMvF-K… | Even if you seen AI coming it came so fast it was nothing they could do in such … |
| ytc_UgxtXqfDu… | Could AI be given a 'hard-wired/software in ROM' moral core that scrutinised all… |
| ytc_UgzsXVvMD… | The study involving blackmail you did was disproven. The AI was instructed to op… |
| ytc_UgwZD9_q9… | GET THIS AI OFF MY SCREEN! (why is bro watch a video of how fast ai is moving)… |
Comment
And here's the fundamental trick the ai developers do not want you to know:
Ai has not accomplished ANY of the leaps of intelligence development. Only the illusion of doing so. WE have to make the intellectual leaps, not it. Because ai today requires our training to "learn".
Platform: youtube | Posted: 2025-12-11T19:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response

```json
[{"id":"ytc_Ugw7hm9Amj-tFXxpmcV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz-yCmll121bAU8ScR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwoAo8RVO8EAi-na0N4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwyXCVTaTzbYhr-t-14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzZ8SeWtETFywEdDgx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzIlOP21OViEwVThKN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxzjSl9xsOk1dNv5Ip4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyTs3GARBIzwf7FMcd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxK-Mnl3Vy6rRwAx8p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwQAITBDJPNQluIZgV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}]
```