Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Just some random idea here. If an AI model gets things wrong 10% of the time, couldn't one just use that same model to check the output and tell it to find that 10%? Once it said what that 10% was, the primitive humans can check that knowing/hoping that 90% of the 10% is accounted for and then just rinse and repeat either by checking subsequent ai outputs or by comparing multiple ai outputs of the same original model. Does that make sense? I'm almost certain it's not that simple, but wouldn't that work in theory?
Source: YouTube · Video: AI Responsibility · Posted: 2025-10-01T17:2… · Likes: 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxGakg0bp_PHLWf2pJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwzwVy3kNaU3enDIoF4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugz4yaW9wm1aRugHp3F4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugzq-GUvVOXJSOE9D9N4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwT1hE2fJAk9Tz02MR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwuPdsxskFbZBNDjaF4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwrrsvUjRVXdpTJpvR4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyhUrrXgDabvgH4BbV4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugyn7L4i843DDkK3KXh4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxyZ5ILTYNNe9ouZ3h4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
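The raw response is a JSON array of coding records, each keyed by a comment id with one value per coding dimension. A minimal sketch of parsing such a response into an id-indexed lookup, assuming only the record shape shown above (`index_codings` is a hypothetical helper, not part of any existing tool; the two inlined records are copied from the response for illustration):

```python
import json

# Two records copied from the raw response above; a real caller would pass
# the full response string instead.
raw = '''[
  {"id": "ytc_UgxGakg0bp_PHLWf2pJ4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz4yaW9wm1aRugHp3F4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]'''

def index_codings(raw_json: str) -> dict:
    """Map comment id -> coding dict, dropping the redundant id field."""
    records = json.loads(raw_json)
    return {r["id"]: {k: v for k, v in r.items() if k != "id"} for r in records}

codings = index_codings(raw)
print(codings["ytc_UgxGakg0bp_PHLWf2pJ4AaABAg"]["emotion"])  # indifference
```

Indexing by id makes it straightforward to join a record back to the coding-result table for its comment; malformed model output would surface as a `json.JSONDecodeError` or `KeyError` here.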