Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
An expert on the AI safety saying that any number between 10%-90% is a reasonable prediction... That's REALLY scary, isn't it? Sadly Liron forgot to precisely define the timeframe and P(doom) definition this time. I guess we can assume that we're talking about P(doom) let's say 10 years after creating actual AGI?
youtube AI Governance 2025-08-23T07:4… ♥ 1
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugwim9XQC9rU_cnMzhN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyO6Ytj4-Ipljm9bO54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzBUp9cqxp-Q-SKku14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz2EiMn64SuvupH3-V4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy_vIeMWyPOX-y2BsR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyOOA_hESTcxGbeUjt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy1T_34YaiGCD0NUaF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz5YYrTA7lZg1omUoZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzbvp-J4ZvzkrKuSpl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxRl34qJ6wXvrpB-Ax4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
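A raw response like the one above can be parsed and indexed by comment ID to recover any single comment's coding. A minimal sketch, assuming the response is a valid JSON array of objects as shown (the two entries below are copied from the real response; the `lookup` helper is illustrative, not part of any tool):

```python
import json

# Raw model output: a JSON array of per-comment codings
# (abbreviated to two real entries from the response above).
raw = """[
  {"id": "ytc_Ugwim9XQC9rU_cnMzhN4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyO6Ytj4-Ipljm9bO54AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]"""

# Build an id -> coding index so each comment's dimensions are one lookup away.
codings = {entry["id"]: entry for entry in json.loads(raw)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for one comment; raises KeyError if absent."""
    return codings[comment_id]

print(lookup("ytc_Ugwim9XQC9rU_cnMzhN4AaABAg")["emotion"])  # indifference
```

In practice the raw string would come straight from the model response field, and a malformed response would surface here as a `json.JSONDecodeError` rather than a silent miscoding.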