Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This was absolutely amazing (at least for me). Funnily enough I found the parts where they werent engaging in the "AI will kill everyone topic" even more interesting. And to be honest I felt like Yudkowsky sometimes felt that Wolfram had fair points when asking him what he ment by that. This wasnt a debate imo so there isnt a winner but I kinda sympathysize with Wolfram here overall. Very stimulating talk!
youtube AI Governance 2024-11-12T12:2…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       mixed
Policy          unclear
Emotion         approval
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugwrs34dPhKqwKUNjat4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwmQr70otrPbIqXgvd4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugx3ZZ0fBzZs6ikKE8F4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugw0ULd8JLncDiNnD794AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzKFqMFOBRSRYNZbgJ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxPIOza9X46ztg-tHN4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgxrKilhY4pmsWdQc5B4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgyHjEFXpXmACP1JfAZ4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgweOSrgE_M3vGGTCQB4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugxu0SgXLnEB6z3gy4R4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"}
]
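The raw response is a JSON array with one object per comment, keyed by a `ytc_…` comment id and carrying the four coding dimensions. A minimal sketch of how such an array can be parsed, looked up by id, and tallied per dimension (standard-library Python only; the two entries are copied from the response above, and the variable names are illustrative):

```python
import json
from collections import Counter

# Two entries copied from the raw response above; the full array has ten.
raw = '''[
  {"id": "ytc_Ugwrs34dPhKqwKUNjat4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwmQr70otrPbIqXgvd4AaABAg", "responsibility": "unclear",
   "reasoning": "mixed", "policy": "unclear", "emotion": "approval"}
]'''

codes = json.loads(raw)

# Index the codes by comment id to inspect a single comment's coding.
by_id = {c["id"]: c for c in codes}
print(by_id["ytc_UgwmQr70otrPbIqXgvd4AaABAg"]["emotion"])  # approval

# Tally each coding dimension across all parsed comments.
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(dim, Counter(c[dim] for c in codes))
```

This is only a sketch of reading the output shown here; it assumes the model returned syntactically valid JSON, which is worth validating (e.g. catching `json.JSONDecodeError`) before coding results are stored.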