Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Good grief! What a dim-witted thing it is for human to create AI which, in turn, will make humans obsolete. My question is: Can AI exist without humans? Doesn't AI require maintenance or can it run indefinitely on its own? If not, why obliterate humans? Why make them obsolete? If humans need XYZ (i.e.: Maslow's hierarchy) to thrive, then why not add that to the AI programming? Create programs where AI generates standards of living which generate maximum human potential. Include parameters like purposefulness, social connection, whole food production, clean water and air, health care, quality of life, and maybe even eliminate the use of money. That would be a great program for AI to run. The current system that revolves around the use of money has people acting from greed, addicted and killing themselves just trying to stay alive. Use AI as a tool to set parameters for people who so desperately need guidance. It's just a thought. We could always just keep doing what we're doing. Whatever. If it's all a simulation, nothing really matters anyway.
youtube AI Governance 2025-09-06T00:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          liability
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxxpgM9RwZcQEqVRXR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyuqSfL_ycHh22UiJh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugzcug56sHbsnl3vp114AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwA_r9_EywaKPzgVa94AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgwnOkPQigKYQL-en794AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxU1RF0NjpvEFlTjbB4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzjUWQRkDjGB5MDRy14AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzHMHCgFSL1cKEM-s14AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugz3xUY9iSKt1TbTk6t4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzHklbmU2IFkqeUGl94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"}
]
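The raw response above is a JSON array with one object per comment, keyed by the comment `id`; the Coding Result shown earlier corresponds to the entry for `ytc_UgxU1RF0NjpvEFlTjbB4AaABAg`. A minimal sketch of matching the raw response back to a coded comment, assuming only the JSON structure shown above (the dict-by-id lookup is an illustrative choice, not necessarily how the tool itself does it):

```python
import json

# Two entries copied from the raw LLM response above; the real
# response contains one object per coded comment.
raw = '''[
  {"id": "ytc_UgxU1RF0NjpvEFlTjbB4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzHklbmU2IFkqeUGl94AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "resignation"}
]'''

# Index the array by comment id for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# Retrieve the coding for the comment displayed on this page.
row = codings["ytc_UgxU1RF0NjpvEFlTjbB4AaABAg"]
print(row["responsibility"], row["policy"])  # → developer liability
```

This recovers exactly the dimension values reported in the Coding Result table (responsibility: developer, policy: liability), confirming which array entry the displayed comment maps to.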