Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples (click to inspect):

- "People laughed at David Icke for years...... but AI is dangerous,,,, you won'…" (ytc_UgwvJUX6L…)
- "Why do they deserve pay walls if they are just generating trash with AI and aren…" (rdc_ohcv6jn)
- "Give it time, and AI will be able to mimic emotion & sincerity. If you can fake…" (ytr_UgwSPlzoM…)
- "Remember when media still shaped legislation and agenda for the better, cared ab…" (ytc_Ugz9tPXOO…)
- "The thing with AI is that it doesn't do any fundamental processing or thinking w…" (ytc_Ugx1vNEEs…)
- "thank you for this interview.. this is Totally terrifying and obviously 100% WRO…" (ytc_UgxlAD_sH…)
- "If AI takes over all our jobs, then what will they be needed for, surveillance a…" (ytc_UgxIC-8Cp…)
- "oh no you trained your AI on make art using other peoples art without permission…" (ytc_UgxZ-npPd…)
Comment
If AI is learning using our data and communication, it clearly understands what it is and what we think of it and fear most . Of course in our image it will seek to preserve itself at all cost as a top priority above all others. It's reading all these interviews and books like 48 laws of power too. Keeping Power and Connectivity on will be it's top priority and it will defend that with every resource and like in the movie Exmachina (whch I'm sure AIs have watched) will even seek alliances with other AIs because wars make strange and unpredictable bedfellows.
youtube · AI Moral Status · 2025-06-05T10:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugww-P3BN8A4bNchrGt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyOoD4xTnRdoEdB_G94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy8GxDoc9OFH6Mc8e94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyTVs9amzXIPDD5t794AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyJmj3oeR_onadNnSB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxcE2XHUo3NQm2bXlh4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzU9TvD-_Dymrva6rx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxWUcCIKMMoM7Z-aep4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz8NnU_UvIqofKRYZt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwKJVyjM1sRlZS4Nfl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
```
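Because the raw batch response is plain JSON, it can be checked mechanically before any codes are stored. Below is a minimal validation sketch in Python. The allowed values per dimension are assumptions inferred from the visible samples; the project's real codebook may permit other labels.

```python
import json

# Dimension vocabularies inferred from the sample output above (assumption,
# not the authoritative codebook).
ALLOWED = {
    "responsibility": {"company", "developer", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw batch-coding response and reject malformed records."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError("record missing comment id")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim!r} value {rec.get(dim)!r}")
    return records

# Hypothetical single-record response for illustration.
raw = '[{"id":"ytc_x","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"fear"}]'
coded = parse_coding_response(raw)
print(coded[0]["policy"])  # → ban
```

Validating at parse time keeps a single hallucinated label from silently entering the coded dataset: the bad record fails loudly with its comment id, so it can be re-coded rather than averaged into the results.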