Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This video seems to have taken Sam’s words out of context. He’s openly discussed his concerns about AI since the launch of ChatGPT, and possibly even earlier. These concerns, which he’s expressed in public conversations, stem from his understanding of AI's potential dangers when misused. It's crucial to recognize that AI's development is an inevitable progression in the current technological trend. OpenAI's achievements in the field weren't created in a vacuum. If not OpenAI, someone else would have led the way. It's possible to interpret Sam’s concerns as malevolent, but I think this notion seems far-fetched. More importantly, it's reckless to base our judgment solely on selective reporting without considering his full perspective. We should avoid accepting a potentially manipulative narrative without thorough investigation. I encourage everyone to think for themselves. I'm not here to sway opinions, but instead I ask that you make an informed decision on this and everything else. Assess the data before forming your views, rather than echoing a narrative that might be strategically crafted. One mind. Yours.
youtube · AI Governance · 2023-10-26T03:1…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugwj8axzKk5pgEPnLTV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz4p-vxNDQO8r4t5514AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxWtvJkMfdKO4zFCJB4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxsrG8NcISOXh-_fbd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx7_XkHJrc_Qtq17lB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyK1Eq_HLjaVIm2woJ4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwHE8qutyS1cx9ZSqt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwmjuFWMg68u8_qy994AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxBJvYq8WX-JYBlI1Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxfJA6iAnDGBOkL--14AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"outrage"}
]
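The raw LLM response is a JSON array with one coding object per comment. As a minimal Python sketch of how such a response can be inspected, the snippet below indexes the array by comment id and looks up one comment's coded dimensions. The ids and field names are copied verbatim from the response above (only two of the ten entries are reproduced here for brevity):

```python
import json

# Excerpt of the raw LLM response shown above (two of the ten entries).
raw_response = """
[
  {"id":"ytc_Ugwj8axzKk5pgEPnLTV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxsrG8NcISOXh-_fbd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
"""

# Index the codings by comment id so any single comment can be looked up.
codings = {entry["id"]: entry for entry in json.loads(raw_response)}

coding = codings["ytc_UgxsrG8NcISOXh-_fbd4AaABAg"]
print(coding["reasoning"])  # consequentialist
print(coding["emotion"])    # indifference
```

Indexing by id also makes it easy to cross-check a displayed coding result against the raw model output for the same comment.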