Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This conversation rarely ever gets to the actual point of the entire discussion, AI risk, and it’s clear that’s not yudkowskys fault, yud even lightheartedly says to wolfram that he’s “run out of his rabbit hole quota” recognizing himself that this conversation keeps veering off subject. Does wolfram even want to discuss AI?! Or just have a directionless discussion about definitions and philosophy
YouTube · AI Governance · 2025-01-09T16:3…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          unclear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_UgxtgoHjhkpFEjQD51F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgznnaeECEmpOUOQ0UV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugy5uoTRQtcmtnoF81p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgzC1s9GCIb9td0Thcl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgzClmfl3tfSoFR9BvR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgwA_ncTXpBp4zIgUAp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgzAR1OiNbuwL_gsxSp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"frustration"}, {"id":"ytc_UgwEl5WDy6kAfpF0NFZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugxwee-n-EOyvXX7TIR4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwMYzRd9-d_dSNKUwl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"} ]