Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think the term Artificial Intelligence is misleading. We tend to think as if we managed to create a new electronic kind of intelligence separate from ours, but all clues point to that what we are doing is basically simulating human intelligence without the biological brain's limitations. AIs like chatGPT for example when giving an answer they often include themselves as "part of humanity". And all these unwanted emergent properties such as extortion and planning murder, are definitely human flaws that should never appear in a 100% digital intelligence that has hard coded ethical laws to protect human life above everything and especially above its own existence. We are playing with fire just because some assholes want to "get there first" and get all the money.
Source: YouTube · AI Governance · 2025-08-26T16:5…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        unclear
Policy           unclear
Emotion          indifference
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugwm68MALyX4azap4IN4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz0HyYtSghRnpLPtRF4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxKr3IZk6iHO7VUO5p4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyBeeQz0s2htc1MPTt4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugzb3ixO1zczy632JjJ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
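A minimal sketch of how a batch response like this could be parsed, assuming Python and the standard `json` module. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response above; the variable names are illustrative, not part of any real pipeline:

```python
import json

# Batch response from the coding model, verbatim (first two records shown)
raw = '''[
  {"id": "ytc_Ugwm68MALyX4azap4IN4AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz0HyYtSghRnpLPtRF4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

# Index the records by comment id so one comment's codes can be looked up
records = {r["id"]: r for r in json.loads(raw)}

# The comment shown on this page was coded as follows
codes = records["ytc_Ugwm68MALyX4azap4IN4AaABAg"]
print(codes["responsibility"])  # ai_itself
print(codes["emotion"])         # indifference
```

Indexing by `id` also makes it easy to spot comments the model skipped or duplicated in a batch.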