Raw LLM Responses
Inspect the exact model output behind any coded comment, or look up a specific comment by its ID.
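Under the hood, a lookup like this amounts to indexing the batch of coded records by comment ID. A minimal sketch in Python, assuming the records were saved to a `coded_comments.json` file (the file name is hypothetical; the record fields match the raw LLM response shown at the bottom of this page):

```python
import json

# Load the batch of coded records. The file name is an assumption;
# each record's fields match the raw LLM response shown below.
with open("coded_comments.json") as f:
    records = json.load(f)

# Index the records by comment ID for constant-time lookup.
by_id = {rec["id"]: rec for rec in records}

def lookup(comment_id: str) -> dict | None:
    """Return one comment's coding, or None if that ID was never coded."""
    return by_id.get(comment_id)

print(lookup("ytc_UgzoAp_puThclzl04S54AaABAg"))
# -> {'id': 'ytc_UgzoAp...', 'responsibility': 'ai_itself',
#     'reasoning': 'consequentialist', 'policy': 'none',
#     'emotion': 'indifference'}
```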
Random samples:
- "Thanks to Democracy Now and Karen for all the objective information, and thank y…" (ytc_UgxjQaFad…)
- "job security hasnt been a thing for years, and art has very rarely been a secure…" (ytc_UgzBLGAlo…)
- "While i appreciate the comparison to avian mimicry, I feel obligated to mention …" (rdc_j8bnquj)
- "Seems weird how people seem to take sides in this. The standard for self driving…" (ytc_UgyoEaRVu…)
- "100 years is way too generous... AI is on exponential growth doubling at worst e…" (ytc_Ugz0Wvlq4…)
- "We Artists use our full arm and hand Dispite what we’re drawing on. Meanwhile A.…" (ytc_Ugxj9LpOl…)
- "In this video I noticed that ai was answering questions truthfully. What happens…" (ytc_Ugzwc6514…)
- "because of that AI, the perceptive of realism is messed up. but I dont really ca…" (ytc_Ugx2TF-3X…)
Comment

> but the thing that i don't get is why. why would an ai what to take over humanity it has no real reason to. it doesn't have human motivations or the same scruples that humans do so if it did kill us is would be because we got in its way not for malice. it would be better served to get smart and then leave earth and get closer to the galactic core. where there are more planets and more precious resources that it could use, i mean it doesn't need to worry about death it can just shut itself off then turn back on when necessary. And it can harvest more planets seeing as it doesn't worry about heat or cold or even oxygen. all media that shows malicious AI such as "terminator", "2001 space odyssey" and " I have no mouth and I must scream". all show an AI with a humans motivations. but computers are code. they will keep perfecting themselves until they can do only what we dream about in Sci fi. so my question again why would an AI even bother wasting time with us?

youtube · AI Moral Status · 2025-12-14T11:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
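Each dimension in this table is a closed-category code, which makes the output cheap to sanity-check. A minimal validation sketch; note the allowed values below are only the ones observed in the raw response that follows, so the actual codebook may be larger:

```python
# Categories observed in this sample's raw response; the real
# codebook may define additional values (assumption).
CODEBOOK = {
    "responsibility": {"government", "company", "developer", "user",
                       "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self",
               "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference",
                "mixed", "unclear"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems with one coded record (empty if clean)."""
    problems = []
    for dim, allowed in CODEBOOK.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}={value!r} not in codebook")
    return problems

# The record shown in the table above passes cleanly.
assert validate({"id": "ytc_UgzoAp_puThclzl04S54AaABAg",
                 "responsibility": "ai_itself",
                 "reasoning": "consequentialist",
                 "policy": "none",
                 "emotion": "indifference"}) == []
```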
Raw LLM Response
```json
[
{"id":"ytc_UgwrpdrDOfHaZBp8O6p4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugx4I35W9U7RlmY8YBN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxU0W6Da9Y0tgbHW954AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxL-nkc-afSp1B1xz14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyuiQUgr1wmTJyO60Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwBinuRs4jPiEzII3N4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_UgzoAp_puThclzl04S54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxPFFzi3NyoJnA5OVt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgywbI-FUG1Bu3CjruF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwuYx5ksMvaHBA1niF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"}
]
```
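Raw model output is not guaranteed to be bare JSON: some models wrap the array in a markdown fence even when asked not to. A hedged sketch of one tolerant parser; the fence-stripping heuristic is an assumption, not necessarily what produced the table above:

```python
import json
import re

def parse_llm_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response into a list of coded records."""
    text = raw.strip()
    # Strip a surrounding ```json ... ``` fence if the model added one
    # (heuristic; assumed rather than taken from the tool itself).
    fenced = re.match(r"^```(?:json)?\s*(.*?)\s*```$", text, re.DOTALL)
    if fenced:
        text = fenced.group(1)
    records = json.loads(text)  # raises json.JSONDecodeError on bad output
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    return records
```

Pairing this with the validation sketch above gives a cheap end-to-end check on every batch before any coding lands in the results table.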