Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
The big thing on AGI versus AI and even our ability to properly use AI is AI can give suggestions (i.e. "Write my term paper") but should never read you schedule, see you need a term paper, write the term paper and turn it in for you. This is where we will end up with huge mistakes and it will utterly fail at. For instance, if you have something that is testing parts, and AI can adjust pass fail criteria, it inevitably will end up valuing passing over the part being good and we will get worse products. However, if you as a human have it aid you but still make the final decision, it can work just fine.
Platform: youtube · Topic: AI Moral Status · Posted: 2025-07-30T13:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugy2Ug88_KlfPcuHJzN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxnTlpj6W_7QDFU-I94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzwvN3ENJSt8bvLf1N4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxIjn-Otn7ElBw6SVl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzLInCyhHTE4I0iUk94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzHuVwWUUNgaYvCq4R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyxEgqm6kf7qGXkZwd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgylR4APAmVRdZfoC-B4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugy0Ed_QouUEZoM4S094AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugz_sLguj2tIr8rjd1J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```
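As the response above shows, the model codes comments in batches, returning one JSON object per comment; looking up a comment's codes amounts to parsing the array and indexing by `id`. Here is a minimal sketch of that lookup, assuming every raw response is a JSON array shaped like the one above (`parse_batch` and `DIMENSIONS` are illustrative names, not part of the actual pipeline):

```python
import json

# The four coding dimensions seen in the sample output above
# (assumed to be the full set for this illustration).
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array of coded comments)
    into a {comment_id: codes} mapping, skipping malformed entries."""
    coded = {}
    for rec in json.loads(raw):
        comment_id = rec.get("id")
        if comment_id and all(dim in rec for dim in DIMENSIONS):
            coded[comment_id] = {dim: rec[dim] for dim in DIMENSIONS}
    return coded

# Example: look up one comment from the batch shown above.
raw = '''[
  {"id": "ytc_Ugy2Ug88_KlfPcuHJzN4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''
print(parse_batch(raw)["ytc_Ugy2Ug88_KlfPcuHJzN4AaABAg"]["policy"])  # regulate
```

Skipping entries that lack an `id` or a dimension, rather than raising, keeps one malformed object from discarding the rest of an otherwise valid batch.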