Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think Scott was exactly right when he said basically nobody is going to start pumping the brakes on AI until a major catastrophe occurs. A minor incident like a Chernobyl or a Titanic sinking will only be a speedbump, not a red light. Let's hope by that point it's not too late to stop. Like Eliezer said, we will reach a point where the AI won't let us stop.
youtube AI Governance 2025-01-15T18:4… ♥ 5
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyuKve0p8NacQZ3jl94AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxliOdOHICKNldjUnV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyRscm_LqcObDfLevF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw6hcLS-ig_2l0mKih4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzgNakNI2froR_VBJh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyvbW-3XGJXi6zQWkp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx82TcIE1NIjP5EFhd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzghbarzZgKxERfW_R4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgypFt-geNkrVXiCZKZ4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzgIK4NlPPKCCuuXFB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"}
]
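The raw response is a JSON array of per-comment codings keyed by comment id. A minimal sketch of matching a coding back to a comment (the ids and field names come from the dump above; the two-entry string is an excerpt, not the tool's actual loading code):

```python
import json

# Excerpt of the raw LLM response shown above (two of the ten entries).
raw_response = '''
[ {"id":"ytc_UgzgNakNI2froR_VBJh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyuKve0p8NacQZ3jl94AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"} ]
'''

# Index the codings by comment id for O(1) lookup.
codings = {entry["id"]: entry for entry in json.loads(raw_response)}

# Look up the coding for the comment shown in the Coding Result table.
coded = codings["ytc_UgzgNakNI2froR_VBJh4AaABAg"]
print(coded["policy"])   # regulate
print(coded["emotion"])  # fear
```

Each dimension in the Coding Result table is simply one field of the matching JSON object.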