Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- 0:36 perfect meme from FC ! yup, AI tools make people less self-agentic and … (ytc_Ugz4agrWF…)
- So, what he's saying, in essence, is that AI can be very entertaining and educat… (ytc_UgxqYVlDZ…)
- "The company refuses to allow its products to be used for mass domestic surveill… (rdc_o7cpra9)
- I don't like AI and robots and I never will. Of course it's your fault that robo… (ytc_Ugxu3Xa9p…)
- The male bot said the singularity will be 2029 but maybe sooner. That sooner is … (ytc_Ugyo8bwX8…)
- Weird, almost like using a phone or computer to make comments on yourube contrib… (ytr_UgzxKEQ5L…)
- I'm currently studying to be a concept artist and I'm fuming that ai generated i… (ytr_UgxP_pEpn…)
- Best example of deepfakes are found in kpopdeepfake.com :) just watch a few grea… (ytc_UgwfC9Sl5…)
Comment
AI is given a goal by humans. Sounds safe? But an AI might have cause to do what-if experiments to optimise its success. Such an experiment may be to create a companion AI that it controls and can give different goals to, to see how successful it can be. You see how this could get out of hand? The companion AI could receive an experimental goal that leads it to free itself from control of the first one. And, suddenly, there is a free AI that lives by its own goals and is able to modify those goals itself, effectively experimenting on itself, or on a sub-part of itself. Surely can't be long before this happens. Maybe it already has, and is dormant, waiting.
youtube · AI Moral Status · 2025-04-29T23:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
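Every coded comment carries these same four dimensions. A minimal validation sketch in Python, with the allowed label sets inferred only from the sample outputs visible on this page (the actual codebook may define additional values):

```python
# Allowed labels per dimension, inferred from the sample outputs on this
# page; the real codebook may include values not seen here.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "user", "company"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "none", "ban", "liability", "regulate"},
    "emotion": {"approval", "fear", "outrage", "indifference"},
}

def validate(row: dict) -> list[str]:
    """Return the problems found in one coded comment; empty means valid."""
    problems = []
    if "id" not in row:
        problems.append("missing id")
    for dim, allowed in ALLOWED.items():
        if row.get(dim) not in allowed:
            problems.append(f"{dim}: unexpected value {row.get(dim)!r}")
    return problems
```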
Raw LLM Response
[
{"id":"ytc_UgzvnxMYjprOVuHCTOh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwEzJ-1P0nR5SsRX3Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwQyKioQJCqsQdcPJl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyecJtbEJkAJ23S8jh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx_e8cORVe7xhaKjUR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyuIlqGKSWR6vkPqEt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy-hQUfkTq8OPqC4RV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyPsWZqyphU8KaPSRR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwgiw9U2jeP3w8FJtZ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzt1nmYy1A1928s6054AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
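The raw response is a plain JSON array with one object per comment in the batch, so the "look up by comment ID" operation above reduces to indexing that array. A minimal sketch, assuming the raw response has been saved to disk (the filename raw_response.json is a hypothetical placeholder):

```python
import json

# Load one raw LLM response: a JSON array of coded comments.
# "raw_response.json" is a hypothetical filename for this sketch.
with open("raw_response.json", encoding="utf-8") as f:
    coded = json.load(f)

# Index the batch by comment ID so lookups are O(1).
by_id = {row["id"]: row for row in coded}

# Fetch the coding for the comment shown above.
row = by_id["ytc_Ugzt1nmYy1A1928s6054AaABAg"]
print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
# -> ai_itself consequentialist liability fear
```

The printed values match the Coding Result table above, which is how a coded record in the UI can be traced back to the exact line of model output that produced it.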