Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
would you agree with me or disagree on the following statement? Dwarkesh defends the AI risk argument better than Yudkowsky
Source: youtube · AI Governance · 2023-08-18T01:4… · ♥ 1
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytr_UgxClpKCOca9l72V6wN4AaABAg.9pVxfb4Xki_9pX2CRyoLzz","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_Ugxpmtj0TS1YrNnBVzp4AaABAg.9pVxZtplBtm9pW8vSBIlj-","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytr_UgzU4TtPOomA81vHOlZ4AaABAg.9pVwdDd9EtM9pXYDO0bwkj","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_Ugxhapbt_9wMHv4IR0t4AaABAg.9t_q1vcLTxO9tdVLdtki49","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
  {"id":"ytr_Ugy6gczKkWt9D7andhx4AaABAg.9tT64Hf1EkB9tTfUBI5b7g","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytr_Ugy6gczKkWt9D7andhx4AaABAg.9tT64Hf1EkB9tXrFBpA8WG","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_Ugy6gczKkWt9D7andhx4AaABAg.9tT64Hf1EkB9tYZwygn3Qp","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytr_UgxakbXidzyURU6t2Gh4AaABAg.AU4cyREleFbAU5KLeLQhZ8","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgwAqLk9Mv-hiFAAxGx4AaABAg.AU2qxnHl8g4AU39zLZ3EnC","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytr_UgwAqLk9Mv-hiFAAxGx4AaABAg.AU2qxnHl8g4AU5_LMJLQza","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
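Since the model returns one batch response for many comments, the coding for a single comment has to be looked up by its id inside the JSON array. A minimal sketch of that lookup (the ids and field values below are shortened placeholders, not the real ids from the response above):

```python
import json

# Hypothetical excerpt mirroring the batch-response structure shown above:
# a JSON array of per-comment codings, each keyed by a comment id.
raw = """
[
  {"id": "ytr_example1", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "unclear",
   "emotion": "indifference"},
  {"id": "ytr_example2", "responsibility": "unclear",
   "reasoning": "unclear", "policy": "unclear",
   "emotion": "indifference"}
]
"""

def coding_for(raw_response: str, comment_id: str) -> dict:
    """Return one comment's coding from the model's batch response."""
    by_id = {row["id"]: row for row in json.loads(raw_response)}
    return by_id[comment_id]

print(coding_for(raw, "ytr_example2")["emotion"])  # indifference
```

If an id is missing from the response, `by_id[comment_id]` raises `KeyError`; in practice one would use `by_id.get(comment_id)` and treat a miss as an uncoded comment.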