Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
1. Creating a flexible and good AI for the future will most likely have a feature where it learns from it's mistakes. At this point, it seems the robot can't identify and understand subtle things like sarcasm. Sarcasm comes naturally to many, but in reality it is basically a joke and a lie together. So don't run around crapping your pants just yet.
2. If robots replace us on our jobs and people have to pay just as much at the store, that's when you should be afraid. That just means it's being used to suppress us and take the money from the citizens. Cause if you think about it this way, nature really isn't ours to ravage or lay claim to in the first place, but we do anyway, we take it, modify it, and sell it on.
3. There is a possibility of the robots at some point going rampant and against us, but at this point, I don't really see how it makes a difference the way we blindly wage wars against each other without even making the effort of finding a common ground and coming to terms with each other. Or possibly we'd be able to hard-code some solid rules that can't be overruled or overwritten. But then again, the more complex the system, the more prone they are to bugs, crashes, broken parts.
youtube AI Moral Status 2016-03-25T11:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          industry_self
Emotion         resignation
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UggM17JJ087UIHgCoAEC", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgisvCArPc7z6XgCoAEC", "responsibility": "developer", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugi9gln4ZYlCIXgCoAEC", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgihJ_pj2suFj3gCoAEC", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UggUWgyXay02kHgCoAEC", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UginEExRKppwHngCoAEC", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "resignation"},
  {"id": "ytc_Uggx4eO8i2r6xXgCoAEC", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UghzWQ0XuAwIeHgCoAEC", "responsibility": "company", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UggdJNIjHpLIEXgCoAEC", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UghKe1paihEzKHgCoAEC", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"}
]
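The raw LLM response is a JSON array with one object per coded comment, each carrying the five fields shown above (id, responsibility, reasoning, policy, emotion). A minimal sketch of how such a payload could be parsed, validated, and looked up by comment id — the field set comes from the response itself, while the expected-key check and variable names are assumptions for illustration (only two of the ten records are repeated here for brevity):

```python
import json

# Two records copied from the raw LLM response above.
raw = (
    '[{"id":"ytc_UggM17JJ087UIHgCoAEC","responsibility":"ai_itself",'
    '"reasoning":"consequentialist","policy":"none","emotion":"outrage"},'
    '{"id":"ytc_UginEExRKppwHngCoAEC","responsibility":"developer",'
    '"reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"}]'
)

# Field set observed in the response; used to reject malformed records.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

records = json.loads(raw)
valid = [r for r in records if set(r) == EXPECTED_KEYS]

# Index by comment id so a code can be retrieved for display.
by_id = {r["id"]: r for r in valid}

code = by_id["ytc_UginEExRKppwHngCoAEC"]
print(code["responsibility"], code["emotion"])  # developer resignation
```

A check like this is useful because LLM output is not guaranteed to be well-formed JSON or to contain exactly the requested fields; records that fail the key check can be flagged for re-coding rather than silently stored.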