Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If Roman thinks that there is a 99.99999% chance we're in a simulation, then I can't understand why he's so concerned about AI wiping out humanity, what exactly would be lost?
Source: youtube · 2024-07-25T10:5… · ♥ 1
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          resignation
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_Ugw_txO8Wfge0LCyZ914AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugzvr4pBrI4CpXNENPZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgyBaHdEoEzHLhMjBRZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugy-QG4yrFRjpfMaXIZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_Ugzyd2RBFPd-Zdd40HR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugy2qqEO021EydYvH4J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxGNlKwNyrFXMY7gYZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzeYKsWAYwemgdhrax4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxiF7yZzIRx3VF6klZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugy0JRcY3jzbyJRxH_V4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"} ]