Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@Alexander_Kale these are tools humans use. AI, AGI would be different it would be another actor in the field and if its goals are misaligned with ours we'll be screwed.
Source: YouTube · AI Governance · 2025-09-01T09:4…
Coding Result
Dimension        Value
---------------  --------------------------
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytr_UgxruEjafI6CsLocy7Z4AaABAg.AMLJsptIyK_AORfGra1UjP", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytr_UgzHLQ93nAOv4VA3x5R4AaABAg.AMLIXnMbnvaAMNoC_eiByr", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_UgzHLQ93nAOv4VA3x5R4AaABAg.AMLIXnMbnvaAMX1JpVwLz_", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_UgyjxkmB2dSeputoN_R4AaABAg.AML6tJ0E45AAMNYemhIaET", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "resignation"},
  {"id": "ytr_UgyjxkmB2dSeputoN_R4AaABAg.AML6tJ0E45AAMNb0QQaQIc", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_UgyjxkmB2dSeputoN_R4AaABAg.AML6tJ0E45AAMOVIlYWH28", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgzSzEnMjXwN1Cr9e714AaABAg.AMKtUOKpjC-AMLpi6xesXh", "responsibility": "company", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytr_UgzSzEnMjXwN1Cr9e714AaABAg.AMKtUOKpjC-AMNufewgB8h", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytr_UgzXVBSVTIxe4C3b8eF4AaABAg.AMKtOYpkxW-AMLjKwehExN", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_Ugxv_UEXyhKZ7R9Xii94AaABAg.AMKsrtpfL1uAMN_3a5mXTl", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
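To inspect the coding for a specific comment, the raw response can be parsed and indexed by `id`. The sketch below is a minimal, hedged example assuming the model output is exactly a JSON array of flat records with the five fields shown above (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the function name `index_codings` is an illustrative invention, not part of any existing pipeline.

```python
import json

# A truncated sample of the raw LLM response shown above (one record kept).
raw = '''[
  {"id": "ytr_UgzHLQ93nAOv4VA3x5R4AaABAg.AMLIXnMbnvaAMNoC_eiByr",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "fear"}
]'''

# The five dimensions every record is expected to carry.
REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw_json: str) -> dict:
    """Parse the raw model output and index codings by comment id,
    raising if any record is missing one of the coding dimensions."""
    records = json.loads(raw_json)
    for rec in records:
        missing = REQUIRED - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing {missing}")
    return {rec["id"]: rec for rec in records}

codings = index_codings(raw)
rec = codings["ytr_UgzHLQ93nAOv4VA3x5R4AaABAg.AMLIXnMbnvaAMNoC_eiByr"]
print(rec["emotion"])  # fear
```

Indexing by `id` makes it straightforward to join the model's coding back to the original comment record, and the field check surfaces malformed responses before they silently enter the coded dataset.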