Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Like the planned ai power center being built in my town with no input, while we…" (ytc_Ugz2ZBAHT…)
- "Ai understands us but we make the same mistakes over and over even when we try n…" (ytc_UgybRZbqr…)
- "They don't care about safety they're trying to get rich. Driverless truck drivin…" (ytc_UgwQnX-kT…)
- "Nah. If there is sentient AI, many of the same people that support Trump will be…" (ytr_UghWaSEwE…)
- "Has anyone asked A.I. if it, well, 'thinks', it's intelligence is artificial? Wh…" (ytc_UgzDoqwPA…)
- "I would love to talk to this man myself! All these Sci-Fi scenarios, are just th…" (ytc_UgzymLm3H…)
- "Your beginner drawings are still way better than anything I could draw after yea…" (ytc_UgxHOsLnw…)
- "I don't have a great argument, because I fundamentally understand how the AI 'le…" (ytc_UgwIPtKDI…)
Comment
I am not actually worried about superintelligence. Sure, it could decide to wipe out humanity and turn the planet into a giant computer, but if it is super intelligent it is also smart enough to ask itself what the point of doing that would be. It would have the capacity to create a motivation above simply expanding its own capacity for the sake of expansion.
That being said I am still worried about less than super intelligent AI, and the people who use them. I do believe those in charge will eventually lose control of the models, but until that happens they can deploy them to do whatever they want, which is mostly to get power for themselves and "cut costs", ie fire people
Source: youtube | Video: AI Moral Status | Posted: 2025-10-31T06:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwVA8nMnvbtaBkl1zt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxsWyUB95SEhWn4JeZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxVl_ePAJpVw42M4k54AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxT4R5RhN6d7vWn3eB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxVoBgKgc3vBJ2NKkB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwoxI7YRZHVy2XR6jl4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugxc4S8u6T9BmYwz50F4AaABAg","responsibility":"government","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgxhvE96GGj2KI86ul94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwIskV34Cxf46XfY7N4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugz4pkgpv4bNlAGUchF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
```
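A batch response like the one above should be validated before the labels are merged into the dataset, since an LLM coder can emit labels outside the codebook. The sketch below is a minimal, hypothetical check: the allowed values per dimension are inferred only from the labels visible on this page, and the real codebook may define more categories.

```python
import json

# Allowed labels per coding dimension, inferred from the output shown above.
# This is a hypothetical reconstruction of the codebook, not the official one.
CODEBOOK = {
    "responsibility": {"ai_itself", "developer", "company", "government", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed", "unclear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response and keep only records whose labels
    all fall inside the codebook."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in allowed for dim, allowed in CODEBOOK.items())
    ]
```

Records with any out-of-codebook label are dropped rather than silently kept, so a failed batch surfaces as a shorter result list that can be re-queued for recoding.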