# Raw LLM Responses

Inspect the exact model output for any coded comment. Look up a comment directly by its ID, or pick one of the random samples below.

## Random samples
- "Wisdom comes from experience, not books. I dought this robot has gone through mu…" (ytc_UgyMgxFLQ…)
- "The very fact that the AI is even capable of intentionally choosing to harm a hu…" (ytc_Ugxqc7UBI…)
- "It's so funny to me when I see on Pinterest everyone is stealing AI designs to c…" (ytc_Ugw8l0s7_…)
- "This AI moment feels like when a child sees a new, gifted cousin getting all the…" (ytc_UgzQJu5vk…)
- "In recent years, when a new infectious epidemic broke out and became a pandemic,…" (ytr_Ugx4A52-7…)
- "The issue is that Michael is leaving open areas in the arguments. DBH (2014), Ed…" (ytc_Ugxl-gaKT…)
- "Haha, it does look quite realistic, doesn’t it? The robot’s design really aims t…" (ytr_UgyvlyXtR…)
- "The issue that you are not seeing is this: The entire need for humans in the wor…" (ytc_Ugx0Dqe3h…)
## Comment
> All you need to know is this. We split the atom,when Einstein warned it might not stop and could end everything. Made two bombs out of this and used them both.
> If you think they will quit developing a,I, your just wasting time. There is zero chance it won’t be developed
> A.I. will be our last invention.
youtube · AI Moral Status · 2025-10-04T21:2…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
## Raw LLM Response

```json
[
{"id":"ytc_UgwJDLD6-Ec-Cv1Z3wV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwXlnMTuMDL9PJ2PZF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwleId_Z6UdptqqiWl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy2ArLXj6gxe9_jgbR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugwxj1Z2SRqVgWvK2bJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyvyELDFYFRWVRIvLZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwKj829MqpqX4SX2ZR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwJCSrNl4nzESFhwph4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxgqFTZ4rNpCuKtNUh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwJpQI9HPQd2zoNIE54AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
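The raw response above is a JSON array with one record per comment, each carrying the four coded dimensions. A minimal sketch of how such a batch can be parsed, validated, and looked up by comment ID follows. The allowed category sets are inferred only from the values visible in this batch (the real codebook may define more), and `parse_llm_response` and `lookup` are hypothetical helper names, not part of any shown pipeline.

```python
import json

# Codebook values inferred from the responses shown above; the real
# codebook may define additional categories (assumption).
ALLOWED = {
    "responsibility": {"distributed", "company", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

def parse_llm_response(raw: str) -> dict:
    """Parse one raw LLM batch response and index its records by comment ID.

    Rejects records that are missing a dimension or that use a value
    outside the allowed set, so silent coding drift is caught early.
    """
    by_id = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
        by_id[rec["id"]] = rec
    return by_id

def lookup(by_id: dict, id_prefix: str) -> list:
    """Look up coded records by full or truncated (prefix) comment ID."""
    return [rec for cid, rec in by_id.items() if cid.startswith(id_prefix)]

# Demo with one record taken verbatim from the response above.
raw = ('[{"id":"ytc_UgwleId_Z6UdptqqiWl4AaABAg","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
coded = parse_llm_response(raw)
print(lookup(coded, "ytc_UgwleId")[0]["emotion"])  # fear
```

Prefix matching in `lookup` is what makes the truncated IDs shown in the sample list (e.g. `ytc_UgyMgxFLQ…`) usable as search keys.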