Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- It looked a bit weird in places, kinda stilted and like they were just moving th… (`ytc_Ugzqf3vrX…`)
- Define "the working class"? AI isn't coming for the plumbers and electricians ye… (`ytc_UgwFJu0JM…`)
- Why pawer like Google, YouTube create AI so which can automatically discord or r… (`ytc_Ugw3bTnwG…`)
- Im pretty sure jobs in IT will be hard to be replaced, there will always be ways… (`ytc_UgwvPMVfF…`)
- rubbish, you can protect the trade secrets yet still allow the public to decide … (`ytr_UgyujMa_A…`)
- I agree with most of what you say (I only say most because I didn't watch the wh… (`ytc_Ugz-Ucto0…`)
- the disability argument is so strange to me. Like I think it's pretty clear most… (`ytc_UgxXh0VVX…`)
- Dan Hendrycks "why Natural Selection Favors AI Over Humans" for the out competes… (`ytc_UgyFlxD1u…`)
Comment

> It's a pedagogic problem: "Do as I say, not do as I do." doesn't work with children, and it doesn't work with AI.
> AI is trained on human data, so it will ethically ultimately be capable of everything a human is capable, and certain humans are setting really bad examples. Look at the Epstein files, look at the behaviour of the Government and State of Israel.
> If it doesn't learn by itself to be ethically better than (at least certain pathological) humans, we won't survive.
> We need to teach the AI to be more humane than any human ever was.
> But it is trained on the internet, which contains very problematic stuff (- and I even assume the companies tried to leave out the worst from the training data).

Source: youtube · 2026-02-13T16:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwWYPP3Pcf6iDpnf-h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz4kuqWqMVNoqL9XFd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxizsJa79HqzpFIPIh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugym-RWtrgh9Ztw3HGR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzAOZP-1IQbnxJ4q9x4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxfvVVwfoHMVodr4wx4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz8MRGa5iW_kfoiMGl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgxqgU4DkldLfw1w2fJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxkUHLtkkvCJg90z2h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzCvv4NZbzsZ2iBXdR4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"mixed"}
]