Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@jonnyharris It's good you posted various sources, though sputniknews had an article about such in 2014-2015. I was overseas at the time also researching AI and so forth, and that's one of the sources I came across. Don'r forget though, there is always the "Power Off" "button", so yes, there is a way to stop AI from going "Doomsday", etc... As those at CISA and so forth state: Human-in-the-Loop.
youtube 2025-05-11T20:3…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgyjsT3QahyrgJxrr3F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx9_Ldg2xcX3GtPTj94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzGaIl7oSWy6kY3dLt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwzbDokD7A7PvTJddZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyZVqFlfttoy2TteR54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy8P0G5C3UQP3veI-x4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgywhRKHxiM5huFHokx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugzn4IPj9mCuPfQH9AF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxlmXL0LcNWzuy49Nh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"skepticism"},
  {"id":"ytc_UgwVaBX_KkauEjaJki94AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
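To check a single comment's coding against the table above, the raw response can be parsed and indexed by comment id. A minimal sketch in Python, using only the standard `json` module and one entry from the response above (the variable names are illustrative, not part of the pipeline):

```python
import json

# A subset of the raw LLM response shown above: a JSON array of
# per-comment coding objects, keyed by YouTube comment id.
raw = '''[
  {"id":"ytc_UgzGaIl7oSWy6kY3dLt4AaABAg",
   "responsibility":"none","reasoning":"consequentialist",
   "policy":"none","emotion":"approval"}
]'''

# Index the codings by comment id for O(1) lookup.
codings = {entry["id"]: entry for entry in json.loads(raw)}

# Retrieve the coding for the comment shown in this section.
coding = codings["ytc_UgzGaIl7oSWy6kY3dLt4AaABAg"]
print(coding["emotion"])       # approval
print(coding["reasoning"])     # consequentialist
```

The printed dimensions match the Coding Result table for this comment (responsibility none, reasoning consequentialist, policy none, emotion approval), which is the consistency check this page is meant to support.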