Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The biggest problem with this scenario is that AI must have quality data to train on. AI cannot create that data on its own due to the 'copy of a copy' problem that creates hallucinations and even worse outcomes. We see this in practice when we force an AI to think longer on a task. More time does not give better results. So, without grounded quality experiential data, AI does not improve.
Source: YouTube, "Viral AI Reaction", 2025-11-24T15:0…
Coding Result
Responsibility: developer
Reasoning: consequentialist
Policy: none
Emotion: fear
Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxH73MNIB2ymK0tLhB4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz8Z7GS2z0yzhz1crp4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx-QtuMn5SYrnZqv154AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyVh7OHncSZo0BY0Zl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyJK-fe5yjpW_0cIxN4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzLETmjHh4sAkio3Kt4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwxHdc_m-tSlXOrTJl4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyHMRwgEuQIl4zVE-N4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzsCosPTuzNNiIxz6h4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxMYyd1y7gtPjC-1XF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
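The raw LLM response is a JSON array with one coding record per comment id. A minimal sketch of matching a record back to its comment (field names and ids are taken from the response above; the `index_codes` helper is a hypothetical illustration, not part of the tool):

```python
import json

# Two records excerpted verbatim from the raw LLM response above.
raw = '''[
  {"id": "ytc_Ugx-QtuMn5SYrnZqv154AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzLETmjHh4sAkio3Kt4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]'''

def index_codes(raw_json: str) -> dict:
    """Index the coded records by comment id for direct lookup."""
    return {rec["id"]: rec for rec in json.loads(raw_json)}

codes = index_codes(raw)
print(codes["ytc_Ugx-QtuMn5SYrnZqv154AaABAg"]["emotion"])  # → fear
```

This lookup is how the per-comment "Coding Result" shown above can be recovered from the batch response.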