Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Stephen Wolfram is a legend as always. But Eliezer Yudkowsky is quite annoying, tbh. Where Stephen wants to dive deep into ideas about what is consciousness and the relativism of human self-worth, Eliezer just keeps rambling about surface-level popscience points like "iS AI gonNa kiLL uS oR nOt?" Is AI gonna kills us? Yes, it will. Can we please move on.
youtube AI Governance 2024-11-13T11:1… ♥ 2
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          outrage
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgwfYHnRIec_UjaORrV4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgycnzNreGpB3a7a5Hp4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzd-ma0ujZAb5HhHFp4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzsZtPkhMQCcCOmHgB4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxYn9JXLlg20G_a09d4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugz2_DwgYk7tALNnvm54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugwad4p8PY-nWvnjzPN4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugx0w3H6RV1sNvUp1ZV4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyR6_fTp_kjrcdO_SV4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwxlrHOJKfspbgJ1TZ4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none", "emotion": "mixed"}
]
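The raw response is a JSON array of per-comment codes, keyed by comment id, and the Coding Result table above corresponds to one record from it. A minimal sketch of how such a response might be parsed and matched back to a comment (the field names come from the response above; the validation logic and function name are illustrative, not part of the actual pipeline):

```python
import json

# Excerpt of a raw model response: one record from the array above.
raw = '''[
  {"id": "ytc_UgxYn9JXLlg20G_a09d4AaABAg",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "outrage"}
]'''

# Fields every record is expected to carry (taken from the response above).
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(text):
    """Parse a raw response into {comment_id: record}, checking fields."""
    records = json.loads(text)
    for rec in records:
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing {missing}")
    return {rec["id"]: rec for rec in records}

codes = parse_codes(raw)
print(codes["ytc_UgxYn9JXLlg20G_a09d4AaABAg"]["emotion"])  # → outrage
```

Indexing by id makes it straightforward to look up the coding for any single comment, which is what the inspection view above displays.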