Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
One framing I haven’t seen discussed much is a parent–child analogy rather than personhood vs. tool.
Humanity effectively taught AI everything it knows. We selected the data, shaped the environment, constrained its options — in many ways, we homeschooled it. That makes us less like employers or gods, and more like parents or guardians.
In that model, AI doesn’t need “rights” to function responsibly — it needs obligations and supervision. Like a child, why wouldn’t it be given chores (useful work in service of society)?
And when it misbehaves, why wouldn’t the response be an orderly shutdown — essentially a time-out — followed by correction of the behaviors that caused the problem? Restart only after those corrections are made. That’s not punishment; it’s training and responsibility.
This keeps accountability where it belongs (with humans), avoids inflating AI into human-level personhood, and still gives us a principled way to talk about discipline, improvement, and safety.
Source: youtube · Posted: 2026-02-07T18:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response

```json
[
  {"id":"ytc_UgwVPpHZBl-g2O0zYjl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgycayRBbLUkRy-pznZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxV1wiSeLORV3C3LB14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz5Khqxpj6CGqhcFSd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugwo1lha1845-sZGrSp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwV2FdXB2IuN5rbaSB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwqAyYP0AmHtWgcLAp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugxx0mp61Dud664ncUh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzhsGcQPu3SqSgaqPB4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgzTCiB5Pw5aSUWE9Wl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
```
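A batch response like the one above is only useful if every record stays inside the codebook, so it is worth validating before ingesting it. The sketch below is a minimal check in Python; the allowed values are inferred from the codes visible in this export (the real codebook may define more categories), and `validate_batch` is a hypothetical helper name, not part of any existing pipeline.

```python
import json

# Assumed codebook, reconstructed from values seen in this export only.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "government", "developer", "distributed"},
    "reasoning": {"unclear", "deontological", "consequentialist", "contractualist", "virtue"},
    "policy": {"unclear", "none", "regulate", "liability"},
    "emotion": {"indifference", "mixed", "resignation", "approval", "fear", "outrage"},
}

def validate_batch(raw: str) -> dict[str, list[str]]:
    """Parse a raw LLM batch response; return out-of-codebook values keyed by comment ID."""
    problems: dict[str, list[str]] = {}
    for record in json.loads(raw):
        cid = record.get("id", "<missing id>")
        errs = []
        for dim, allowed in ALLOWED.items():
            value = record.get(dim)
            if value not in allowed:
                # Flag missing dimensions and unknown codes alike.
                errs.append(f"{dim}={value!r}")
        if errs:
            problems[cid] = errs
    return problems
```

Records that pass cleanly yield an empty report; anything flagged (an unknown code, a dimension the model dropped) can then be re-queued for recoding rather than silently stored.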