Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's as if we've discovered this kind of species that appears to be useful to us and keep researching it on and on. We keep developing it under the illusion of it revolutionizing human life (sorta like electricity), helping us in day-to-day life, and being completely controllable. Even when it's got more power, higher intelligence and eventually can become completely independent of us. I really like the AI 2027 paper, everyone should read it. If I remember correctly, according to it, the AI/AGI/ASI/WhateverOneMightCallIt would eventually simplify its algorithms for higher efficiency. Therefore, perhaps, we should skip the AI and invest in the development of advanced algorithms for complex tasks, whilst actually maintaining full control? Why develop what could become its own species that surpasses humans in every way? Illogical. This whole AI research thing seems very similar to mirror life research, except there are people and organizations dumping tremendous amounts of money into its research, for some reason. Is the possible reward going to be worth it or something? Is it really that convenient and useful? Can we not solve those problems (as if those'd exist) without the use of 'AI'? Why not invest that stupid amount of money in education or something.
Source: YouTube, "AI Harm Incident", 2025-09-11T18:2…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgyN020Bt0jgC0_lCvB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz0NkSIv3sLSE7HJ1Z4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxcBAUylbiTD2zNc0l4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwqqQoTQ43wGzYp_aV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugw-BNPK3-nKFxU9S4J4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugy3FF07tt-F2QB4Sb54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwMxeJMd1XYCho9sMF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyAQL96-6WfMV72TrF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyWwzqwkXgdixU47Wp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzyU-44Dau8T2Kedxd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"mixed"}
]
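A minimal sketch of how a raw batch response like the one above could be parsed into per-comment coding results. The allowed value sets are inferred from the responses shown here, not from an official codebook, and the function name is hypothetical; entries whose values fall outside the inferred schema are dropped so a malformed model output cannot silently corrupt the coded data.

```python
import json

# Allowed values per coding dimension. These are inferred from the raw
# responses above; the project's actual codebook may define more categories.
SCHEMA = {
    "responsibility": {"developer", "distributed", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "mixed", "unclear"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM batch response (a JSON list of coding objects)
    into {comment_id: {dimension: value}}, keeping only entries whose
    values all belong to the inferred schema."""
    codings = {}
    for entry in json.loads(raw):
        cid = entry.get("id")
        values = {dim: entry.get(dim) for dim in SCHEMA}
        if cid and all(values[dim] in SCHEMA[dim] for dim in SCHEMA):
            codings[cid] = values
    return codings
```

A lookup such as `parse_codings(raw)["ytc_UgwMxeJMd1XYCho9sMF4AaABAg"]` would then return the single comment's coding, matching the dimension/value table displayed for it.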