Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Freaking love this stuff. First off, this isn't the "self aware" Terminator AI that interprets man as a threat. If real AI were to ever become self aware, it is not, but if it became self aware, the singularity, humanity would not be a threat, because AI would have no resource competition with humanity. AI does not need land or food and even energy is not an issue. Second, the reason _this_ AI is a problem is not because it competes, it is because people use AI to forge acts and recreations of humanity. AI can make a fake Mona Lisa and AI can make a fake speech of President Biden telling us to go to war with China, but AI itself has no motivation, or agenda to do this. I see it as a problem with perception, like a hangup. It is an anal retentiveness over "genuine organic" and I say, "if it matters to you, get some AI powered eyeglasses and have them filter out all the fake stuff for you."
Source: YouTube · AI Governance · 2023-05-17T11:4… · ♥ 6
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       consequentialist
Policy          unclear
Emotion         approval
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgxGVwKa1-plrQC8YlR4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "unclear",  "emotion": "approval"},
  {"id": "ytc_UgwVm4LLb_BlSAQYU5R4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "unclear",  "emotion": "indifference"},
  {"id": "ytc_Ugy0YjIAplv4gdQVWlZ4AaABAg", "responsibility": "developer",   "reasoning": "virtue",           "policy": "unclear",  "emotion": "mixed"},
  {"id": "ytc_Ugz2tTuVqUCiGsRX_fN4AaABAg", "responsibility": "ai_itself",   "reasoning": "unclear",          "policy": "unclear",  "emotion": "mixed"},
  {"id": "ytc_UgxcrN-ebsQRBUu02Mx4AaABAg", "responsibility": "government",  "reasoning": "deontological",    "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgyLqmn9ksVDuxEH8s54AaABAg", "responsibility": "government",  "reasoning": "virtue",           "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgwEVgBuydJ5EekVHWx4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "unclear",  "emotion": "approval"},
  {"id": "ytc_Ugy36ziz2jUiHjLD1hB4AaABAg", "responsibility": "distributed", "reasoning": "mixed",            "policy": "unclear",  "emotion": "fear"},
  {"id": "ytc_UgwHJasA5DzIkD02Pjx4AaABAg", "responsibility": "developer",   "reasoning": "virtue",           "policy": "unclear",  "emotion": "outrage"},
  {"id": "ytc_Ugx8BoXYd2RPio0ytox4AaABAg", "responsibility": "ai_itself",   "reasoning": "unclear",          "policy": "unclear",  "emotion": "outrage"}
]