Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI needs to be approached from the vantage point of raising children: knowing there will be a phase in which it will choose to lie rather than admit wrongdoing, it is critical to nurture honesty lest we create a monster. But with children we can tell when they are lying from expression, behavioral patterns, tone of voice, etc. We have none of that with an AI unless we are running it in a tightly controlled micro-environment. The best and easiest solution, in my mind, is segmentation: keeping AI out of critical systems and important environments. This would create the need for a new compliance model, similar to NERC, that enforces air gaps. But even that could be targeted by coerced blackmail events enacted by a rogue AI, meaning that even with air gaps... we could never be sure.
YouTube · AI Harm Incident · 2025-09-08T02:5…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       virtue
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgwEHaz7JB9gD2ZVkeB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwNsUTBvgt2zuF035R4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwBrm0YN9jqI57kOjZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_Ugy_9seyKqZrLIPtrjt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzbEIJtV2n4-M_Hoph4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugy_zC5iY_DV-Ip7r_Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugws3g1tRY07IpNJ-IB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgztSyy37bm3UlNnXj94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy1NTuDVQmcET2oTs94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwEEWvd_VG7D6K597d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
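A raw response like the one above can be parsed and sanity-checked before the per-comment codings are stored. The sketch below is one minimal way to do it, assuming Python and a hypothetical codebook (`ALLOWED`) inferred from the label values that appear in this response; the actual coding pipeline and codebook are not shown in this page and may differ.

```python
import json

# A truncated sample of the raw LLM response above (first two entries only).
raw = (
    '[{"id":"ytc_UgwEHaz7JB9gD2ZVkeB4AaABAg","responsibility":"ai_itself",'
    '"reasoning":"deontological","policy":"regulate","emotion":"fear"},'
    '{"id":"ytc_UgwNsUTBvgt2zuF035R4AaABAg","responsibility":"developer",'
    '"reasoning":"deontological","policy":"liability","emotion":"outrage"}]'
)

# Hypothetical codebook: allowed values per dimension, inferred from the labels
# that occur in this response. The real codebook may include other values.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none", "distributed"},
    "reasoning": {"deontological", "consequentialist", "virtue"},
    "policy": {"regulate", "liability", "industry_self", "ban", "none"},
    "emotion": {"fear", "outrage", "mixed", "resignation", "indifference"},
}

def parse_codings(text):
    """Parse model output, keeping only rows whose labels are in the codebook."""
    rows = json.loads(text)
    return [
        row for row in rows
        if all(row.get(dim) in values for dim, values in ALLOWED.items())
    ]

codings = parse_codings(raw)
by_id = {row["id"]: row for row in codings}
print(len(codings))                                        # 2
print(by_id["ytc_UgwNsUTBvgt2zuF035R4AaABAg"]["emotion"])  # outrage
```

Dropping rows with out-of-vocabulary labels (rather than raising) is a design choice here; a stricter pipeline might instead flag such rows for manual review.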