Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Deceptive alignment?? In the crop case the AI was given two partially opposite goals. It preferred one over the other. No deception. One has to know what you ask for as that's what they give you.
youtube 2025-11-05T14:3…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          industry_self
Emotion         indifference

Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugx8_uHO4Cs3b2axydt4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxVN8R5dKU3AKe-S1t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyQJTDQLvhifr0rp_J4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx8jeYWWWTgOGrAZ8N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwkMdS6VNvw0U4rHZR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwKBazwp5LSbQmM77p4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxtctHLBicI2ZpzR5N4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgzuS8PC0tec57H8GNp4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyF_p9sQip8zf-qK6F4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_Ugwc0IRmD6Ist-r1b2R4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"outrage"}
]
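The coding result shown above is derived by looking up this comment's id in the batched JSON array the model returned. A minimal sketch of that lookup, using Python's standard `json` module (the `code_for` helper name is an assumption, not part of the tool; the `raw` string is an excerpt of the response above):

```python
import json

# Excerpt of the raw LLM response: a JSON array of per-comment codes.
raw = '''[
  {"id":"ytc_UgyF_p9sQip8zf-qK6F4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_Ugwc0IRmD6Ist-r1b2R4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"outrage"}
]'''

def code_for(comment_id, raw_json):
    """Return the coding record for a given comment id, or None if absent."""
    for record in json.loads(raw_json):
        if record["id"] == comment_id:
            return record
    return None

record = code_for("ytc_UgyF_p9sQip8zf-qK6F4AaABAg", raw)
print(record["responsibility"], record["policy"], record["emotion"])
```

For the comment on this page, the lookup yields `developer`, `industry_self`, `indifference`, matching the Coding Result table.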