Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Imagine if we gave superintelligent AI the task of optimising human happiness. It would look at 200,000 years of human history, realise humans spent most of that time adapting to live in the Stone Age. It could decide to destroy the modern world because we will be happier living in the Stone Age!?
Source: YouTube · AI Governance · 2025-09-06T09:3… · ♥ 2
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxeJUYHlnMvIsNut4N4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwovvLx0diOJ4oBzkV4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgytcYUlWTYs4XSPylZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxIT86gumpvTmEfoBR4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugx12VeK1B4RDJb63X94AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgxQr19l5uaiBQ09sMR4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgysYwYMrYdO3AlI3El4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyxnOBafrGTBpG3ieV4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugya9Zn8VR2EzUuICFF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzW1MfKoOdl3vmwEBZ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"}
]
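As a minimal sketch of how such a raw batch response can be inspected, the snippet below parses the JSON array shown above (embedded verbatim) and locates the record whose coded dimensions match the displayed Coding Result. The lookup criteria (`responsibility` and `policy`) are chosen here for illustration; the actual coding tool's matching logic is not shown in this page.

```python
import json

# Raw LLM response copied verbatim from the section above.
raw = '''[
  {"id": "ytc_UgxeJUYHlnMvIsNut4N4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwovvLx0diOJ4oBzkV4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgytcYUlWTYs4XSPylZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxIT86gumpvTmEfoBR4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugx12VeK1B4RDJb63X94AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgxQr19l5uaiBQ09sMR4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgysYwYMrYdO3AlI3El4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyxnOBafrGTBpG3ieV4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugya9Zn8VR2EzUuICFF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzW1MfKoOdl3vmwEBZ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"}
]'''

records = json.loads(raw)

# Find the one record whose dimensions match the Coding Result shown above
# (responsibility=ai_itself, policy=liability uniquely identify it in this batch).
match = next(r for r in records
             if r["responsibility"] == "ai_itself" and r["policy"] == "liability")

print(match["id"])       # -> ytc_Ugya9Zn8VR2EzUuICFF4AaABAg
print(match["emotion"])  # -> fear
```

Only one of the ten records carries that dimension combination, which is why the batch response can be joined back to the individual comment display.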