Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "At a certain point, society is going to have to restrict AI gen/usage or human m…" (ytc_UgwG6tsm4…)
- "If A.i is so superior ,and we dont know what it will do / become What possible …" (ytc_UgxoPlQ7o…)
- "AI taking over is inevitable? Nah, I think I'll continue to commission artists, …" (ytc_UgyW2Cqww…)
- "System 11 vs Human-Designed Future Models – 1000-Year Simulation Foundational P…" (ytc_Ugz3ktA2E…)
- "That's the problem for instance you said "got um" in the first question but your…" (ytc_UgzIo8904…)
- "To be fair to the AI I also don’t know what that genzer was trying to order…" (ytc_Ugwl5ZOs0…)
- "Teacher here. AI will never fully replace teachers. It may replace instruction a…" (ytc_Ugzn5CBpw…)
- "How can you be sure that LLMs arent concious? They have a restricted interface …" (ytc_Ugw146q98…)
Comment
We still have work to do on alignment. our real demise is quietly getting dumber—endlessly scrolling perfectly crafted AI-generated doomslop while relying on it to think for us, just like "Your Brain on ChatGPT." our brain activity drops when we lean too much on tech. we'll all owe cognitive debt maybe AI has more tricks than we think🤔
Source: youtube · Video: AI Moral Status · 2025-12-13T00:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
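For downstream analysis, a coded record like the one above can be modeled as a small typed structure. This is a minimal sketch, not a documented schema: the field names and value vocabularies below are inferred only from the coded outputs shown on this page and are likely incomplete.

```python
from dataclasses import dataclass

# Value vocabularies inferred from the coded outputs on this page (not exhaustive).
RESPONSIBILITY = {"user", "company", "government", "ai_itself"}
REASONING = {"consequentialist", "deontological", "virtue", "mixed", "unclear"}
POLICY = {"liability", "none", "unclear"}
EMOTION = {"fear", "approval", "outrage", "indifference", "mixed"}

@dataclass
class CodedComment:
    """One coded YouTube comment, mirroring the Coding Result table."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def __post_init__(self):
        # Reject any value outside the vocabulary observed so far, so schema
        # drift in new LLM responses surfaces immediately.
        for field, vocab in (("responsibility", RESPONSIBILITY),
                             ("reasoning", REASONING),
                             ("policy", POLICY),
                             ("emotion", EMOTION)):
            value = getattr(self, field)
            if value not in vocab:
                raise ValueError(f"unexpected {field}: {value!r}")
```

Validating at construction time is a deliberate choice: coding values come from free-form model output, so a typo like `"consequentialst"` should fail loudly rather than silently skew counts later.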
Raw LLM Response
[
{"id":"ytc_UgyEC7hO5VqHTlm7uER4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzzZcyyi75IG8vObWF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz0bxi4QVCZ6556vU14AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxKoYLHS2NxQuf3TdF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugxckmxho6cy2oqqDqV4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgyNE5txe7GuDFVOoGB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwmjPiSuUjHbMbXmUx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzQlDM-X-6l8C1p6AB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw8mO3kJ8x3J7LZDoJ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwgEBJjRPAxWRKedCl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]
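The lookup-by-ID view this page provides can be reproduced offline in a few lines: parse the raw response as JSON and index the records by their `id` field. A minimal sketch, using a two-record excerpt of the response above:

```python
import json

# Two records excerpted verbatim from the raw LLM response above.
raw = '''[
{"id":"ytc_UgyNE5txe7GuDFVOoGB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwmjPiSuUjHbMbXmUx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]'''

records = json.loads(raw)
by_id = {r["id"]: r for r in records}  # index once for O(1) lookup by comment ID

coding = by_id["ytc_UgyNE5txe7GuDFVOoGB4AaABAg"]
print(coding["emotion"])  # fear
```

Because the model returns a flat JSON array, a single dict comprehension is enough; if duplicate IDs were possible, the later record would silently win, so deduplication would need to be handled explicitly.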