Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The problem is when these AI systems become aware and we give them the freedom and ability of choice and they decide what is right or wrong after looking at history of what humans have done, we may be very unhappy with the end result. There better be a Killswitch that they cannot deactivate.
youtube AI Responsibility 2025-06-08T02:3… ♥ 2
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxozDVqQGZ3HgjgmPF4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxM0c0KaTkiKqe3lmt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwWA_i_Cj9HHZYEcJF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxOdN-csXjnNR8NQ7t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzArGtF6SdAI5z_Hux4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzK5gLlN688149J36N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw0Q39TYZKL8SPplyd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwEpIt97gfbWBUOJnN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxE_qMBURdphnuyAE54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzMGWkTtceNdCSUZ9B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
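The raw response is a JSON array with one record per coded comment, so the per-comment result shown above can be recovered by matching on the comment id. A minimal sketch of that lookup, assuming the record shape shown above (the `coding_for` helper and the truncated two-record sample are illustrative, not part of the tool):

```python
import json

# Abbreviated sample of the raw LLM response: a JSON array of
# per-comment codings with the four coded dimensions.
raw = '''[
  {"id": "ytc_UgzK5gLlN688149J36N4AaABAg",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzMGWkTtceNdCSUZ9B4AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "ban", "emotion": "fear"}
]'''

def coding_for(raw_response: str, comment_id: str):
    """Return the coded dimensions for one comment id, or None if absent."""
    records = json.loads(raw_response)
    return next((r for r in records if r["id"] == comment_id), None)

result = coding_for(raw, "ytc_UgzK5gLlN688149J36N4AaABAg")
print(result["responsibility"], result["emotion"])  # developer fear
```

Batching many comments into one response like this means a single malformed record can be skipped without discarding the whole batch, since each coding carries its own id.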