Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I've held very similar views for almost a decade. I'm more pessimistic though. We're deluded if we think we can control something smarter and faster than us. We're deluded if we think safety will be built in through policy. The AI arms race will be driven by fear (of being out-paced) and greed. The 'rules' will be ignored. The ONLY hope we have is, as Geoffrey stated, that AI is taught 'from a young age' that humans should not be harmed.
youtube AI Governance 2025-06-21T21:5…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugxf9IqS3bkx7BInt6F4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzSwnKHpzGfFsXcoRp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugyle81WqSsyT0BCn_Z4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwCCi3Jr2n0tbVfxhB4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwOk29U43SHnz1zyLV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzVYOlRpBW2xQJTaF94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugy4m84nJ-jaQPVEm1F4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzTAiPvtaFFj5-ny1d4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgysyVSll-auLoBMZDV4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugw4KUIlSAabt9nlg054AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "resignation"}
]
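Because the model returns one JSON array per batch, looking up the coding for a single comment means parsing the array and matching on the comment `id`. A minimal sketch of that lookup, assuming the raw response is valid JSON in the shape shown above (the helper name `coding_for` and the truncated two-record sample are illustrative, not part of the pipeline):

```python
import json

# Truncated sample of a raw batch response, in the same shape as above.
raw_response = '''[
  {"id": "ytc_UgzTAiPvtaFFj5-ny1d4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugxf9IqS3bkx7BInt6F4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]'''

def coding_for(comment_id, raw):
    """Parse a raw batch response and return the record for one comment id,
    or None if the id is absent from the batch."""
    records = json.loads(raw)
    return next((r for r in records if r["id"] == comment_id), None)

result = coding_for("ytc_UgzTAiPvtaFFj5-ny1d4AaABAg", raw_response)
print(result["responsibility"], result["emotion"])  # distributed fear
```

In practice a real pipeline would also guard against the model emitting malformed JSON (e.g. wrap `json.loads` in a `try`/`except json.JSONDecodeError`), since that is the most common failure mode when batch-coding with an LLM.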