Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It would be smarter, instead of trying to achieve singularity like it's some Apex of importance, assuming your use of AI and vision for AI in the future is for the betterment as a tool for mankind, it would be better to ensure that you maintain a strict line between robot and singularity that doesn't get crossed so that the tool stays as a tool not as a imitation of humans. I know these jackass atheist scientists have already crossed that line but that's what a smarter person would have done. All these scientists working day and night time, to solve problems that they create. Smh. Next year we will be paying carbon credits well we mow the lawn and watch rockets go to outer space for no reason. I guess artificial intelligence is a step up from no intelligence huh?
youtube · AI Moral Status · 2022-12-25T17:4… · ♥ 1
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgwbF3SZDPPy7mC8Xt54AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxItMLnOqf3B_0I8EB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgynYoq9AnxoOIgXm-l4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugx8OOzeOeyRDvV0aTJ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugxev-pyKYsQlaoDCiJ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugy8NKjVd-6ZbCnzhvp4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugx0yH380Ar9FamrCV94AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxAzX9_Uv__DYnj0D54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugyq--qaM5NRN-uBodx4AaABAg", "responsibility": "developer", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgztpdMv0Wl6LoSW--B4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "ban", "emotion": "outrage"}
]
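A minimal sketch of how a raw response like the one above could be parsed into per-comment codings. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response itself; the allowed-value sets are inferred from the codes visible on this page and are an assumption — the actual codebook may define additional categories.

```python
import json

# Allowed values per dimension, inferred from the codings shown above.
# Assumption: the real codebook may permit more categories than these.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"fear", "approval", "outrage", "mixed", "resignation", "indifference"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (JSON array) into {comment_id: coding},
    rejecting any record whose value falls outside the known categories."""
    codings = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
        codings[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return codings

# Look up the coding for the comment displayed on this page.
raw = ('[{"id":"ytc_Ugxev-pyKYsQlaoDCiJ4AaABAg","responsibility":"developer",'
       '"reasoning":"deontological","policy":"regulate","emotion":"approval"}]')
codings = parse_codings(raw)
print(codings["ytc_Ugxev-pyKYsQlaoDCiJ4AaABAg"]["policy"])  # → regulate
```

Validating against a fixed category set at parse time catches hallucinated or misspelled codes before they reach the results table.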