Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytr_UgzQfmMzn…: No it's not. This is the worst possible scenario. Which is unlikely. Assuming al…
- ytc_UgzqG2Iis…: Rod Serling of the Twilight Zone was prescient. Regarding the onslaught of AI, …
- ytc_Ugy16qffA…: Your term of AI vs human doesn't qualify because behind the information your hid…
- ytc_UgxX4Xgbv…: I actually expected them to see the second questions in terms of "one person die…
- ytr_Ugz9rdXWW…: @jimziemer474 my only thought to that is that I presuming (probably naively) tha…
- ytc_UgxB0fbhv…: Why does the semi need a sleeper cabin if there is no human driver?!? AI is on …
- ytc_UgyH8RCYh…: AI can create appealing art, but can it create thoughtful art? I've come to real…
- ytr_Ugz9_gRMW…: AI Stocks are doing really bad so it is not inevitable. Also your car vs horse c…
Comment
Current AI development is lazy design. And AI designers know it and just keep making it more powerful, and hoping their sandboxing is enough to contain the rest. They know safe AI and overall AGI systems take hard work, and they just don't want to put in that hard work.
Think of how the average user just wants magic AI results from a simple prompt, that's too how AI designers are going at it, they want to endow a results machine with as much power to get desired results, and understanding how those results are being made is getting lost in the process.
Source: youtube · Video: AI Moral Status · Posted: 2025-12-16T03:2… · ♥ 1
Coding Result
| Field | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
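
For anyone working with these records programmatically, here is a minimal sketch of how one coded comment could be modelled in Python. The field names mirror the Coding Result table and the `id` field of the raw response; the `CodedComment` class itself and the value lists in the comments (taken from the samples on this page) are illustrative assumptions, not the project's actual schema.

```python
from dataclasses import dataclass


@dataclass
class CodedComment:
    """One coded comment, mirroring the Coding Result table above (illustrative sketch)."""
    comment_id: str      # e.g. "ytc_UgyYRhA5O1iL4_ME1VR4AaABAg"
    responsibility: str  # values observed on this page: developer, user, ai_itself, none
    reasoning: str       # values observed: consequentialist, deontological, virtue, unclear
    policy: str          # values observed: regulate, ban, none
    emotion: str         # values observed: outrage, fear, resignation, indifference, mixed
    coded_at: str        # ISO-8601 timestamp, e.g. "2026-04-27T06:24:53.388235"
```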
Raw LLM Response
[
{"id":"ytc_Ugy9jqVwW4Cv1mhTAQp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyS1m40RcF3lRlHrs14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyFJx4OyWqyaG3RJIt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyYRhA5O1iL4_ME1VR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwdT05ulYf0LhFfF0l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwQX4aEUwKvWlPuGMJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx1zY5xLTtF6QKOH-94AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzahSgXScP44i6_Ygl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxw4FBoXameA8-30eR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz4JmFPwXWOclPVeCB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
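
Because the model returns a whole batch as a single JSON array, recovering the coding for one comment is just a matter of parsing the array and indexing it by `id`. The sketch below is an illustration of that format, not the tool's actual lookup code; `raw_response` is a hypothetical variable holding the response text, truncated here to two of the entries shown above.

```python
import json

# Raw LLM batch response, as displayed above (truncated to two entries).
raw_response = '''
[
  {"id": "ytc_UgyYRhA5O1iL4_ME1VR4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugy9jqVwW4Cv1mhTAQp4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
'''

# Index the batch by comment ID for direct lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Retrieve one record, as the "Look up by comment ID" box does in the UI.
record = codings["ytc_UgyYRhA5O1iL4_ME1VR4AaABAg"]
print(record["responsibility"], record["reasoning"], record["policy"], record["emotion"])
# -> developer deontological regulate outrage
```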