Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
After listening to many of the podcasts on AI safety / existential risk, I think the problem with a fundamental assumption baked into many of the "anti-doom" arguments is illustrated really well in Robert Miles' video "There is no rule that says we'll make it". To summarize it badly: if an extinction-level asteroid were about to strike Earth in 50 years, we could probably deal with that today. But 100 years ago? Yeah, tough luck. 500 years ago? We might not even have been able to see that thing coming. The point is, there is no rule that says we will be ready to deal with a challenge, and so far, as a species, we have been lucky; otherwise we couldn't talk about it now. We might lack the fundamental theory to be able to detect that a system is dangerous. Or we might see it coming but be unable to deal with it.
youtube 2024-06-14T11:0…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_Ugz7jiol8y-3kDAQ72p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwLRAA5Qu-ZLBa8C6l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}, {"id":"ytc_UgwOaAlI_D6L2KcRxyF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugw9-oU5MPpmgAfkr254AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgxTXGMA-UqmL5vf-9l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_Ugwu73_33tALn8zxVMF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwmQ1hv_mfhIV4byUB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugxybr-Ra7PgMv0XTFN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"unclear"}, {"id":"ytc_Ugy5pVyQGxzO-sk7cwd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzomI_L3ALbMDFUUkh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]