Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The problem with AI systems like ChatGPT is that they generate output based on probabilities without consideration for the factual nature of anything it outputs... they will make up facts, citations, authors, book and study titles, and more, in generating their output. Until they can assess the factual nature and validity of the information they both train on and what they output, they will remain problematic.
youtube AI Responsibility 2023-06-15T05:2…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       deontological
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugzx4cEe4MPFqVL-RiN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyS-jfdVbvwDwOdkT14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxoovRq0mkI9Nm4gKp4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwDjUr7K8TDJIWF6dx4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxIL2h1LmjkfsrEz4N4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugzf7HKTEFRtkVWbM_d4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugyq1PlxEOaJOvbEk094AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgykFmoD875y6fODkId4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwsLdKCtakLYwobPfx4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwX5VJil90G7fAG5f54AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
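A raw response like the one above can be matched back to individual comment IDs by indexing the parsed array. A minimal sketch in Python, using the field names from the response above (the two sample rows are copied from it; the `coding_for` helper is illustrative, not part of the original pipeline):

```python
import json

# Abbreviated raw LLM response: two of the ten rows shown above.
raw_response = """[
  {"id": "ytc_Ugzx4cEe4MPFqVL-RiN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwsLdKCtakLYwobPfx4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]"""

# Index the coded rows by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

def coding_for(comment_id):
    """Return the coded dimensions for one comment, or None if the model omitted it."""
    return codings.get(comment_id)
```

Indexing by ID also makes it easy to spot comments the model silently dropped from a batch: any input ID missing from `codings` was not coded.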