Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
How can an AI train itself without access to grounded and high-quality experience? If they just train on what they already know you will get drift and hallucinations. The data they use must be human checked, have an environment where errors have consequences, be diverse in thought, and authentic. You don't get GAI by having it train on its own shadow.
youtube AI Jobs 2025-11-18T21:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          none
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxxvMxlCsTo0XA3OAp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyGjxXpk4LQzjPmEad4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzO6Bj3kQY2tZ3_und4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyoJh7lNMbOM23szl54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzdfHCsM7RvCI8KFU54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyNyD1yOMotrlnQS3x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwLu_CX6fK6gLR6zlh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwpj2ZV5y6OYK8atql4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyY9j5CHwSqAubst294AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyU2tABs6G8vkxXjfh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
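A raw response like the one above can be parsed and sanity-checked before the per-comment codes are stored. The sketch below is a minimal illustration: the label vocabularies are an assumption inferred from this single batch (the values that actually appear above), not a documented schema, and the sample id is hypothetical.

```python
import json

# Label sets observed in the raw response above. ASSUMPTION: these are
# illustrative vocabularies inferred from one batch, not the tool's schema.
OBSERVED = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none"},
    "emotion": {"approval", "mixed", "outrage", "indifference", "fear"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject out-of-vocabulary labels."""
    rows = json.loads(raw)
    for row in rows:
        for dim, vocab in OBSERVED.items():
            if row.get(dim) not in vocab:
                raise ValueError(f"{row.get('id')}: unexpected {dim}={row.get(dim)!r}")
    return rows

# Usage with a hypothetical one-item response:
sample = ('[{"id":"ytc_example","responsibility":"developer",'
          '"reasoning":"deontological","policy":"none","emotion":"mixed"}]')
print(len(parse_batch(sample)))  # → 1
```

Validating at parse time surfaces malformed or hallucinated labels immediately, rather than after they have been written into the coding results.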