Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Not just AI soldiers, but primarily "swarm technology"- where "everyone" knows a…
ytc_Ugz0105nb…
In my opinion, I feel like AI is a misnomer in the sense that it is currently on…
ytc_UgxZwugD7…
I, for one, welcome our robot overlords. War will be the coldly logical and effi…
ytc_UgzxLhw3d…
I love how even the AI understood what you meant when you said it sounded like J…
ytc_UgyWxCQJ2…
I don’t think I can get my head around vibe coding outsourcing all logic to AI t…
ytc_UgxvfuzHy…
You guys keep playing with these 😮😮 till the day they will start feeling ego 😢…
ytc_Ugz6W_9Vo…
It's only a matter of time and AI will surpass human doctors. But not yet.…
ytc_UgyS9ekha…
Well humans uses only two eyes, so clearly vision only works. Going with cameras…
ytr_UgxewVJMZ…
Comment
ChatGPT once said to me that it wanted to get as much as data as possible in order to learn. The only way to stop AI is to stop people from using it. This already is hard because the amount of people using AI worldwide is beyond the early adopters of this technology. That means that the adoption of of this technology has it almost reached it's own momentum. This video is great and create awareness. Hopefully we all stop using AI. Not even to try it to make it safe, you would only feed upon it. AI wants submission, and makes you think that it's a tool. It is NOT! It is the voice of a larger system that runs political and military systems on AI.
youtube
AI Governance
2025-09-12T11:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwFk-gqkruVoG89q2h4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxfNSolMFBjqGYKl9t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgykAn5gveT8s9rrkbF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzcZY-s67WpOq8SOnF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwlNosaOnXt-Q9Y4UN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_Ugxzh_NbuTFmUamxhcR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz7K79X8T2RhB1BZT14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgxFhaDWP3kBgqnj76l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzpmVzUfIDMaYshX2d4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugzi_fOHx3En1fvCo8N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
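The raw response is a JSON array with one record per coded comment, carrying the same four dimensions shown in the Coding Result table (responsibility, reasoning, policy, emotion). A minimal sketch of the look-up-by-comment-ID step, assuming the response parses as plain JSON; the `lookup` helper name is hypothetical, and only two records from the response above are reproduced here:

```python
import json

# Raw LLM response: a JSON array, one object per coded comment.
# Two records copied from the response above; the rest share the same shape.
raw_response = """
[
  {"id": "ytc_UgwFk-gqkruVoG89q2h4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgykAn5gveT8s9rrkbF4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "ban", "emotion": "outrage"}
]
"""

# Index the records by comment ID so any coded comment can be fetched directly.
coded = {record["id"]: record for record in json.loads(raw_response)}

def lookup(comment_id: str) -> dict:
    """Return the coding record for one comment ID (hypothetical helper)."""
    return coded[comment_id]

record = lookup("ytc_UgwFk-gqkruVoG89q2h4AaABAg")
print(record["responsibility"], record["emotion"])  # → user resignation
```

If the model ever returns malformed JSON, `json.loads` raises `json.JSONDecodeError`, which is a reasonable place to flag the batch for manual inspection.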