Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
One framing I haven’t seen discussed much is a parent–child analogy rather than personhood vs. tool. Humanity effectively taught AI everything it knows. We selected the data, shaped the environment, constrained its options — in many ways, we homeschooled it. That makes us less like employers or gods, and more like parents or guardians. In that model, AI doesn’t need “rights” to function responsibly — it needs obligations and supervision. Like a child, why wouldn’t it be given chores (useful work in service of society)? And when it misbehaves, why wouldn’t the response be an orderly shutdown — essentially a time-out — followed by correction of the behaviors that caused the problem? Restart only after those corrections are made. That’s not punishment; it’s training and responsibility. This keeps accountability where it belongs (with humans), avoids inflating AI into human-level personhood, and still gives us a principled way to talk about discipline, improvement, and safety.
YouTube 2026-02-07T18:4…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       contractualist
Policy          liability
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgwVPpHZBl-g2O0zYjl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgycayRBbLUkRy-pznZ4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxV1wiSeLORV3C3LB14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugz5Khqxpj6CGqhcFSd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugwo1lha1845-sZGrSp4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwV2FdXB2IuN5rbaSB4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwqAyYP0AmHtWgcLAp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugxx0mp61Dud664ncUh4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzhsGcQPu3SqSgaqPB4AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgzTCiB5Pw5aSUWE9Wl4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"}
]
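A raw response like the one above is only usable once it has been parsed and checked against the coding scheme. Below is a minimal sketch of that step in Python, assuming the standard `json` module; the `ALLOWED` label sets are inferred from the values visible in these responses, not taken from the actual codebook, and `parse_raw_response` is a hypothetical helper name.

```python
import json

# Allowed labels per dimension, inferred from the responses shown above.
# This is an assumption for illustration, not the coder's actual codebook.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "government", "developer", "distributed"},
    "reasoning": {"unclear", "deontological", "consequentialist", "contractualist", "virtue"},
    "policy": {"unclear", "none", "regulate", "liability"},
    "emotion": {"indifference", "mixed", "resignation", "approval", "fear", "outrage"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM response into coded rows, dropping malformed entries."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Each entry must be an object with a comment id and one
        # allowed label for every coding dimension.
        if not isinstance(row, dict) or "id" not in row:
            continue
        if all(row.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"distributed",'
       '"reasoning":"contractualist","policy":"liability","emotion":"mixed"}]')
print(parse_raw_response(raw))
```

Rows with an unknown label or missing dimension are silently dropped here; a production pipeline would more likely log them for re-coding rather than discard them.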