Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
when "will be" turns into "is" then it's possible to believe in something. until then "will be" will always mean "might not be" so you shouldn't trust it. the problem I see with AI is that it means people don't need to learn as much, if it ever does get good enough to replace people, the knowledge to do the things it replaces people with will get lost, and then, some time in the future, when people need those skills again, nobody has them.
youtube AI Jobs 2026-01-20T04:1… ♥ 1
Coding Result
Dimension: Value
Responsibility: ai_itself
Reasoning: consequentialist
Policy: none
Emotion: fear
Coded at: 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugw2p3waOkxzbbEBGbV4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyWIJlwvJvTRsK427x4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxEEEcyJIkyc0fYAXB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgypSkXh0hlK0cz5Hed4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzyMteDSSvdZ1Zpnux4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyJT-HnjLFlikLy7ER4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_UgxrdQlvY6x7a1WOXwB4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxF9-WyEqpTYHqyPQZ4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz9DodY5vrlzvQcx3R4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwlM2qcrEzxrFdVPqB4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]
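To trace a coding result back from a batch response like the one above, the raw JSON array can be parsed and filtered by comment id. The sketch below is a minimal illustration assuming only the batch format shown here; the function name `code_for` is hypothetical and not part of any pipeline described in this report.

```python
import json

# A two-record excerpt of the raw batch response shown above: each object
# codes one comment along the four dimensions of the scheme.
raw_response = """[
  {"id": "ytc_Ugw2p3waOkxzbbEBGbV4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyWIJlwvJvTRsK427x4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]"""

def code_for(comment_id: str, raw: str):
    """Return the coding record for one comment id, or None if absent.

    Hypothetical helper: parses the JSON array and scans for a matching id.
    """
    for record in json.loads(raw):
        if record.get("id") == comment_id:
            return record
    return None

record = code_for("ytc_UgyWIJlwvJvTRsK427x4AaABAg", raw_response)
print(record["emotion"])  # the coded emotion for this comment, "fear"
```

A lookup like this is how the per-comment "Coding Result" panel above can be reconciled with the full batch response: the comment's id selects exactly one record from the array.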