Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Its because people think LLMs are like proto-Data from Star Trek. That it is capable of decision making. Its not. Its just a sophisticated text based probability machine. For instance if two people are talking and one says "how are..." the next word is highly likely to be "you" and not "pants". Explode that out to an entire lexicon and you get an LLM.
youtube AI Responsibility 2025-10-09T16:4…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxvD2Bp1k3P3yFJ3mx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugwb8mWmw8JRqyrDpBN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyWTGWqymNewPHudqh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxh9YcgLgkqybF5f9J4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxPF_D6ZeNAmVT14i94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxQvvtMeTt4O2LqXrl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzIcwAjYAmCDZzWLr14AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxZwaEJRngSBxYgTgF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyhCZ-k-N8lUo4thTJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwvtsKcTGPhY7XxTdN4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"fear"}
]
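The raw response is a JSON array with one coding record per comment, keyed by comment id. A minimal sketch of how such a response could be parsed and a single comment's coding looked up by id (the parsing approach here is an assumption, not necessarily how the tool does it; the record shown mirrors the data above):

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment.
# This excerpt reproduces one record from the response above.
raw = '''[
  {"id": "ytc_UgxQvvtMeTt4O2LqXrl4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]'''

records = json.loads(raw)

# Index the records by comment id for quick lookup.
by_id = {r["id"]: r for r in records}

# Retrieve the coding for the comment displayed on this page.
coding = by_id["ytc_UgxQvvtMeTt4O2LqXrl4AaABAg"]
print(coding["emotion"])  # -> indifference
```

Looking the record up by id rather than by position guards against the model returning the array in a different order than the comments were submitted.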