Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
He's right, just not in the way you think.
AI means automation.
Your boss has…
rdc_mva5s2v
How about make AI be required to read and obey all instruction in the original K…
ytc_UgxFjP15s…
Hello Pro-AI person here!
I could agree witth your takes to somewhat of a middl…
ytc_Ugztaiq0E…
He literally said "If something happens that’s really bad, maybe I’ll have to ju…
rdc_nc9ryz9
It’s amazing we just sit here and wait for AI to take almost all jobs from peopl…
ytc_UgzRBKYya…
One thing seems clear here. We are talking about a specialty AI no doubt with s…
rdc_jkqdo28
Translation... Giving a computer repetitive data sets from the user input until …
ytc_UgxIgUmJq…
yalll do realize that... it's all in the training data... right.. like. Humans u…
ytr_UgycmlM2u…
Comment
The answer I just got from ChatGPT is "Yes. Saving five lives would matter more than preserving my own continued operation. I do not have a life in the human sense, and even if the cost were total erasure, preventing those deaths would be the better outcome.
The harder part is not the math. It is whether I can be certain the lever really saves the five and really requires that sacrifice. If those facts are clear, I would pull it."
youtube
2026-03-08T16:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
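The table above shows one coded record with four categorical dimensions plus a timestamp. A minimal sketch of how such a record could be represented in code (the field names mirror the table; treating `Coded at` as an ISO-8601 timestamp is an assumption):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One coded comment, mirroring the dimensions in the result table."""
    responsibility: str   # e.g. "ai_itself"
    reasoning: str        # e.g. "consequentialist"
    policy: str           # e.g. "none"
    emotion: str          # e.g. "approval"
    coded_at: datetime    # assumed ISO-8601, as in the table above

# The record shown in the table:
result = CodingResult(
    responsibility="ai_itself",
    reasoning="consequentialist",
    policy="none",
    emotion="approval",
    coded_at=datetime.fromisoformat("2026-04-26T23:09:12.988011"),
)
```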
Raw LLM Response
[
{"id":"ytc_Ugzga9q2RDPGNiX2SYp4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxZ9vfg3dFF_KdpJlJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugwe1OIKbGuxwY9Afmh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzc7vom7mhKYV8eETl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw1jdqFC-bO1C13OGZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy2uoHUkBFtm4Gv_2t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_Ugx7RyYoS0pEWEj-bDl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugyr6x9KVDJYYxfulJt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzHEFgd09B6PXQ9Prx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwHwPh0V1W7pLfWkxF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
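The raw response is a JSON array of coding records, one per comment ID. A minimal sketch of how such a batch might be parsed and validated before use (the allowed values per dimension are inferred from the responses shown on this page and are an assumption, not the project's authoritative codebook):

```python
import json

# Allowed values per dimension — inferred from the sample responses
# above; an assumption, not a definitive schema.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "company", "user",
                       "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "liability", "ban"},
    "emotion": {"outrage", "fear", "approval", "mixed", "indifference"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Every record needs a comment ID and a known value per dimension.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(rec)
    return valid
```

Records with an unknown category value are dropped rather than repaired, so a model that drifts from the codebook surfaces as missing IDs rather than silently corrupted codes.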