Raw LLM Responses

Inspect the exact model output behind any coded comment.

Comment
Wow, so may unasked questions. Jake, you should have asked him what he thinks of the odds of AI ending us completely. The professor has raised the odds of complete human destruction further, twice in the past two years alone. He has estimated that this world has a 10-20% chance of complete human extinction sometime in the next 30 yrs.
Source: youtube · AI Governance · 2025-12-30T19:5… · ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugz0GlS4Q7WtBgtQB1V4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyE6Vw1wZ-bhXHySTB4AaABAg", "responsibility": "distributed", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwhC2wCikpCr7kyagF4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxnzM811kNeoE8lBTR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgxofEpKMauCAtfdcuV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxRxs7Yc3IXN-n4XVJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugzdb2o1n4Cb39R8Q8V4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgymqP0LmfV60CwJ2Td4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwRqSyWEBSVocGGKwN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugz7mmBmUu48H8wyhb94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"}
]
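A minimal sketch of how such a raw LLM response can be cross-checked against a coded comment. The JSON below is an excerpt of the response shown above (two of the ten records); `lookup` is a hypothetical helper, and the chosen id is the record whose dimension values match the Coding Result for this comment.

```python
import json

# Excerpt of the raw LLM response above (two of the ten records).
RAW_RESPONSE = '''
[
  {"id": "ytc_Ugz0GlS4Q7WtBgtQB1V4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxRxs7Yc3IXN-n4XVJ4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
'''

def lookup(raw: str, comment_id: str) -> dict:
    """Parse the raw response and return the record for one comment id."""
    records = json.loads(raw)
    by_id = {r["id"]: r for r in records}  # index the batch by comment id
    return by_id[comment_id]

record = lookup(RAW_RESPONSE, "ytc_UgxRxs7Yc3IXN-n4XVJ4AaABAg")
print(record["reasoning"], record["emotion"])  # consequentialist fear
```

Indexing by id rather than scanning the list keeps the check O(1) per comment when verifying a whole batch.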