Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
To me it was clear Ezra wasn't comprehending the mechanical nature of A.I, especially LLM's, and couldn't get away from his initial thought of coding safeguards in. Eliezer explained why this was futile more than once, Ezra seemed to think he was evading something when he wasn't, because he lacks understanding of regressive model training.
youtube AI Governance 2025-10-17T12:1… ♥ 11
Coding Result
Dimension       Value
Responsibility  user
Reasoning       deontological
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugx4Hnkh8xD9SF74W-d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugyzzw73Hyi8hPakJp14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxMLKfL5W0viIGu9xl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzK9OtyDIgvoXHpFuJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy0hMXzNfinvXrNCMZ4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwuRAK86clvTotycaZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgySIJo2bTtKqmykzwV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz4zfh4kEJyfwJQAcx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxGMvFT7Dhf22kjNg94AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwvWlx3cblKTjD4z8h4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
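A minimal sketch of how the raw response above can be inspected programmatically, assuming the model always returns a JSON array of records keyed by comment `id` (the `raw` string below is a one-record excerpt of the actual response; the lookup id is taken from the coded comment shown above):

```python
import json

# One-record excerpt of the raw LLM response shown above.
raw = '''[{"id": "ytc_UgxGMvFT7Dhf22kjNg94AaABAg",
           "responsibility": "user",
           "reasoning": "deontological",
           "policy": "unclear",
           "emotion": "mixed"}]'''

# Parse the array and index the records by comment id,
# so any single comment's coding can be looked up directly.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

coding = by_id["ytc_UgxGMvFT7Dhf22kjNg94AaABAg"]
print(coding["responsibility"], coding["reasoning"])  # user deontological
```

Indexing by `id` also makes it easy to cross-check the stored coding result against the raw model output for the same comment.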