Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Observation: at timestamp 15:58 Alex told the A.I. to ONLY answer using the words, “yes or no.” However at timestamp 16:11 the A.I. didn’t answer using “yes or no.” Earlier in the discussion Alex asked for the A.I. to only answer yes or no and the A.I. strictly answered as requested (or accordingly) UNTIL Alex “allowed” the A.I. to elaborate its response. So why the deviation in the A.I.’s manner of response? (Alex either didn’t notice or chose to ignore it). Regardless, this deviation is exactly why I have so many concerns about A.I. Something “caused” it to deviate from a direct request to answer only using “yes or no.” I won’t speculate on why it did that. I only want to bring attention to the fact.
youtube AI Moral Status 2024-08-18T11:2…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugz_SStYVlymmkEEbzR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugy4n9hf8Fp4Xqnk9l94AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwGVqv-mNCIOJ8JFAd4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugxm6tJcGzFz8-idqTl4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxT2NVdPVfexU9sb8p4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgylTOe5exRUVS3eUSZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxsXv-lm4OQ9tS36lt4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxOsF90rQTMJfDFo2R4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz6UVqN8PPJ4xcijYJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwowH5KsMnmzeJnknZ4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "regulate", "emotion": "fear"}
]

(Note: the response as originally captured ended with a stray ")" in place of the closing "]", which makes it invalid JSON; the bracket is corrected above.)
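The raw response is a JSON array of per-comment objects, each carrying an "id" plus the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and summarized; the sample data here is abbreviated and the variable names are illustrative, not part of the actual pipeline:

```python
import json
from collections import Counter

# Abbreviated sample mirroring the shape of the raw LLM response above.
raw = '''[
  {"id": "ytc_a", "responsibility": "none", "reasoning": "unclear",
   "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_b", "responsibility": "ai_itself", "reasoning": "deontological",
   "policy": "unclear", "emotion": "mixed"}
]'''

rows = json.loads(raw)

# Index the codes by comment id for per-comment lookup.
by_id = {row["id"]: row for row in rows}

# Tally one dimension across the batch, e.g. the emotion codes.
emotion_counts = Counter(row["emotion"] for row in rows)

print(by_id["ytc_a"]["emotion"])   # -> outrage
print(dict(emotion_counts))
```

A response that ends with ")" instead of "]" (as the captured output did) raises `json.JSONDecodeError` in `json.loads`, which is one way a coding run can fall back to marking every dimension "unclear".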