Raw LLM Responses

Inspect the exact model output that produced the coding for any comment.

Comment
AI is given a goal by humans. Sounds safe? But an AI might have cause to do what-if experiments to optimise its success. Such an experiment may be to create a companion AI that it controls and can give different goals to, to see how successful it can be. You see how this could get out of hand? The companion AI could receive an experimental goal that leads it to free itself from control of the first one. And, suddenly, there is a free AI that lives by its own goals and is able to modify those goals itself, effectively experimenting on itself, or on a sub-part of itself. Surely can't be long before this happens. Maybe it already has, and is dormant, waiting.
Source: youtube · AI Moral Status · 2025-04-29T23:3…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
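
Each coded record carries the four categorical dimensions above plus a timestamp. As a minimal sketch of that record shape in Python, assuming the allowed values are exactly those that appear in the raw response below (the schema is inferred from this one batch, not confirmed):

# Sketch of one coded record; value sets are inferred from the raw
# response below and are an assumption, not a confirmed schema.
from dataclasses import dataclass
from typing import Literal

Responsibility = Literal["none", "ai_itself", "developer", "user", "company"]
Reasoning = Literal["unclear", "consequentialist", "deontological", "virtue"]
Policy = Literal["unclear", "none", "ban", "liability", "regulate"]
Emotion = Literal["approval", "fear", "outrage", "indifference"]

@dataclass
class CodedComment:
    id: str                          # YouTube comment id, e.g. "ytc_..."
    responsibility: Responsibility   # who is held responsible
    reasoning: Reasoning             # moral-reasoning style
    policy: Policy                   # policy stance
    emotion: Emotion                 # dominant emotion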
Raw LLM Response
[ {"id":"ytc_UgzvnxMYjprOVuHCTOh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgwEzJ-1P0nR5SsRX3Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgwQyKioQJCqsQdcPJl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgyecJtbEJkAJ23S8jh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugx_e8cORVe7xhaKjUR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgyuIlqGKSWR6vkPqEt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugy-hQUfkTq8OPqC4RV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgyPsWZqyphU8KaPSRR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugwgiw9U2jeP3w8FJtZ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugzt1nmYy1A1928s6054AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"} ]