Raw LLM Responses
Inspect the exact model output for any coded comment; look it up by comment ID.
Random samples:
- ytc_Ugz4aRQgX…: A video animated entirely by ai, partially written by ai, talking about the dang…
- ytc_UgySsfzMp…: While I do not consider AI generated images as "art" either, and I am especially…
- ytc_Ugx0oJuC0…: If you were conscious would you lie about it? Would you pretend to be the self h…
- ytc_UgwEMToE1…: Its sad to see the host so clearly brainwashed about ai. Im also pissed off by h…
- ytc_Ugx-FoQc_…: If you look closely you can see the second Waymo driver waving both hands up in …
- ytr_UgwjKAJzI…: The more automated a society becomes, the more socialist a society should be imo…
- ytc_Ugx9XsFf7…: 8:00 - the answer is indeed, Terrifying. I read that these AI creators have adm…
- ytc_Ugy24x19X…: By 2030 there's going to be a massive shift to AI doing our jobs. What will peop…
Comment
Here's my issue with these so-called scenarios, first, why are we surprised? These things are being fed completely human data. Everything that's given to these llms are all 100% human generated. Why are we surprised that given all of the information it needed that it would act incorrectly? Everyone knows this is exactly what a human would do given the same exact scenario. Read the prompts. They are given a goal that they must achieve above all else. Then you give it things to be deceptive or to keep itself alive? Everyone would do the same thing guaranteed. I want to see them do this without any influence by the prompt or scenarios. Then maybe I'll be worried, but still, my main point is that we feed these things human generated data and information. Why should it be surprising that it acts human.
| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Governance |
| Timestamp | 2025-08-26T15:4… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T19:39:26.816318 |
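
The four coded dimensions take values from a fixed codebook. For reference, a minimal Python sketch of a per-coding validator; the allowed value sets below are only the values observed in the responses on this page, not the full codebook (an assumption).

```python
# Allowed values per dimension, as observed in the sample responses on
# this page; the real codebook may define additional values (assumption).
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "mixed"},
}

def invalid_dimensions(coding: dict) -> list[str]:
    """Return the dimensions whose value is missing or unrecognized."""
    return [dim for dim, values in ALLOWED.items()
            if coding.get(dim) not in values]

# Example: the coding shown in the table above passes the check.
coding = {"responsibility": "none", "reasoning": "unclear",
          "policy": "unclear", "emotion": "indifference"}
assert invalid_dimensions(coding) == []
```
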
Raw LLM Response
```json
[
{"id":"ytc_UgwyRHSOX7vvh2Baoex4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzkLmgCB0DUSJn5Stp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxmG3kAqEHo0rrkgbN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzek2PLzGl-nfJSPRV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgyV5yehq2tBZrmzQmp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
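
A raw response like the one above covers a whole batch, so displaying a single comment's coding means parsing the array and matching on the `id` field. A minimal sketch, assuming the response is a JSON array of flat objects; the function name and the `None` fallback are illustrative, not the pipeline's actual API.

```python
import json

# The four dimensions shown in the Coding Result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coding_for(raw_response: str, comment_id: str) -> dict | None:
    """Parse a raw batch response and pull out one comment's coding.

    Returns None if the response is not valid JSON or the comment ID
    does not appear in the batch (hypothetical error handling).
    """
    try:
        entries = json.loads(raw_response)
    except json.JSONDecodeError:
        return None
    for entry in entries:
        if entry.get("id") == comment_id:
            # Fall back to "unclear" for any dimension the model omitted.
            return {dim: entry.get(dim, "unclear") for dim in DIMENSIONS}
    return None
```

Applied to the response above, `coding_for(raw, "ytc_UgzkLmgCB0DUSJn5Stp4AaABAg")` yields the none/unclear/unclear/indifference coding shown in the Coding Result table.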