Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- `ytc_UgzUS4S4u…`: "Ai is taking over this world and digital id will be implanted in humans SOON! If…"
- `ytc_Ugz2SJhrs…`: "Even if AI reach ASI level, i doubt it will go against humans. I assume develope…"
- `ytr_UgxdtWPt2…`: "You are genuinely right. I currently dont use AI and have not seen any genuine i…"
- `ytr_Ugx7jUrfk…`: "100% a nonsense little buddy. Why? Because the only reason of AI is to REPLACE H…"
- `ytc_UgxPIaso3…`: "There's no driver shortage there are people from other countries pulling excepti…"
- `ytr_Ugwj0u24C…`: "They didn't do that though. They hyped it up beyond what they can prove and afte…"
- `ytc_UgxmOkCx6…`: "Apparently ai has heen around for 10 yesrs bc theyve been devalued. Job requires…"
- `ytc_UgxHxrwYD…`: "Maybe when Ai really takes over it will give us food vouchers for going on commu…"
Comment
Why do all of our models assume that any sentient AI will be identically motivated? Would “sentient” AI not have differing “opinions” or viewpoints on how to achieve an objective or what the particular objective might be? We can’t even discuss AI without projecting human qualities in disproportional amounts. Thus far we have created that which imitates human communication, but we have inferred intent.
The threat is real, bit we aren’t properly discussing how to assess it.
Platform: youtube
Title: AI Moral Status
Posted: 2025-04-27T15:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | contractualist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgxWIC4zY1shdP9uPlB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx0kHENEmVAkyOgmc14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyY6msUrfIPUo3VRaN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzIB6nooRF7jfpP7NB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzQjHKjHakWOvAjMvJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwXOs6drb32c23mEdN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugxacf7eNSKBs27wBYp4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxWra2IQBgrvVTnmg54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_Ugzbp8yLPte767scveh4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyU3G5Owm7QoYTbKbt4AaABAg","responsibility":"none","reasoning":"virtue","policy":"regulate","emotion":"approval"}
]
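Each record in the raw response codes one comment ID along four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and validated, assuming the allowed category sets are exactly those observed in this response (the real codebook may define additional values, and `validate` is a hypothetical helper, not part of the tool shown here):

```python
import json

# Allowed values per coding dimension. ASSUMPTION: these sets are inferred
# from the codes visible in the response above; the actual codebook may
# contain more categories.
SCHEMA = {
    "responsibility": {"none", "developer", "company", "government", "ai_itself"},
    "reasoning": {"unclear", "deontological", "consequentialist",
                  "contractualist", "virtue"},
    "policy": {"none", "ban", "regulate", "liability", "industry_self"},
    "emotion": {"approval", "fear", "mixed", "resignation",
                "indifference", "outrage"},
}

def validate(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every record against SCHEMA."""
    records = json.loads(raw)
    for rec in records:
        # IDs in this dataset start with ytc_ (comments) or ytr_ (replies).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec.get(dim)!r}")
    return records

# One record taken verbatim from the response above.
raw = ('[{"id":"ytc_Ugxacf7eNSKBs27wBYp4AaABAg","responsibility":"none",'
       '"reasoning":"contractualist","policy":"none","emotion":"mixed"}]')
records = validate(raw)
print(len(records))  # 1
```

Rejecting any record whose value falls outside the known sets catches the most common failure mode of this kind of batch coding: the model inventing an off-codebook label.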