Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We won't have self-driving cars for the same reason we don't have self-driving trains (which are even easier to automate). Liability. If a tool has the ability to kill someone, then you need to be able to hold a human accountable when it does.
youtube 2024-11-15T13:5…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       deontological
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgxgowVY79x6s9wCRK54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugweit1p7Rda9pMQTk14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyyyWoTrhg7WcoUKY54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyBh9vUcQ4hO2nbjwF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz5I6hU8JZIQBNG3R94AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz9RV9SbsudMcEX70V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyUIl4f2qfyy1BnLeN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz1PKtF23fiJhjZwUZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyNPTw8mtaN48NkTgV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxB7L14PPYcECzt19p4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
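The raw response is a JSON array with one coding object per comment. A minimal sketch of how such a response can be parsed and looked up by comment id (the `index_codings` helper is an illustrative assumption, not part of the actual pipeline; the field names `id`, `responsibility`, `reasoning`, `policy`, and `emotion` match the response above):

```python
import json

def index_codings(raw_response: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of objects)
    into a dict keyed by comment id for fast lookup."""
    return {row["id"]: row for row in json.loads(raw_response)}

# Example: the coding that matches the comment shown above.
raw = '''[
  {"id":"ytc_UgyNPTw8mtaN48NkTgV4AaABAg","responsibility":"company",
   "reasoning":"deontological","policy":"liability","emotion":"fear"}
]'''

codings = index_codings(raw)
coding = codings["ytc_UgyNPTw8mtaN48NkTgV4AaABAg"]
print(coding["responsibility"])  # company
print(coding["emotion"])         # fear
```

Keying by `id` makes it straightforward to cross-check any displayed coding result against the exact model output it came from.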