Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Regarding the second question, I would argue as a machine learning researcher th…" (ytc_UgxLC_Ljs…)
- "The only thing I learned from Jurassic Park is that making dinosaurs would be fl…" (rdc_cthvmgu)
- "Just because something CAN be done , doesn't mean it should. Can we NOT render …" (ytc_Ugh6hVu_9…)
- "I just drove a vehicle with adaptive cruise, auto-braking and lane keeping. Whe…" (ytc_UgxQtIE6u…)
- "I can just see the CEOs cackling with laughter about all the stuff they're going…" (ytc_UgwX6uaTp…)
- "Just checked if there is a bias (14.06.25), here is how it went: is it ok to be …" (ytr_UgyszMN6o…)
- "Huh ai will dominate the human race. Funny joke yeah 😂😅. Everyone one saw that v…" (ytc_UgyzZQ2M7…)
- "Yeah he must have been using that one version of ChatGPT that has no news, web s…" (rdc_oabz523)
Comment
I’m more an absolutist on this. Unless AI can undo the damage it created to the environment and the lives it took, it was a mistake to pursue it altogether. Don’t pursue something a system can botch the management of so thoroughly as to destroy the system itself from internal contradictions.
youtube · AI Governance · 2026-01-03T08:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxxsXUf_Ic5FTT-kOB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy_Iufk0sIRE6ynaQN4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwmW705P_YwRHot6E14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyDQO-10YwRvqDJPQN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugyl9mEpXk0TBCoUql54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxVokjI-HudkhMptFl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxZVXO3sVezpsfu5xB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxNMS15OgwBVOcK6PF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwLgUQWZ9azr04mtKx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxBVRYAfIqAe_sJdBZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
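A raw response like the one above can be checked before it is stored: parse the JSON array and verify every coded value falls within the codebook. The allowed values below are a sketch inferred only from the codings visible on this page; the actual codebook may define additional categories, and `validate_codings` is a hypothetical helper, not part of this tool.

```python
import json

# Allowed values per dimension, inferred from the coded samples shown above.
# This is an assumption: the real codebook may include more categories.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"liability", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "approval"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject any value outside the codebook."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
    return rows

# Usage with a single hypothetical record in the same shape as the raw response:
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
rows = validate_codings(raw)
print(len(rows))  # prints 1
```

Validating at ingest time keeps malformed or out-of-vocabulary codings out of the results table, so a lookup by comment ID always returns values the downstream analysis can aggregate.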