Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- @teeeeeela Tech company valuations are the only thing propping up the US stock m… (ytr_UgzGnq0me…)
- EVERY truck driver, including myself, knows that "trucking" was only viable as a… (ytc_Ugw9JaWBR…)
- Except, all that AI power needs a cord. Turn off the power and eventually all th… (ytc_UgwppH6F0…)
- @Matt-dn5jc What are you talking about? I just said you have no idea how AI art … (ytr_UgwVpnbOF…)
- @idkthehandleusername as of right now I'm more on the side of using ai for art b… (ytr_Ugz6fA4nN…)
- It’s trying to measure how likely it is that you’ll commit a crime in capitalism… (ytr_UgzkmT4Ph…)
- A millisecond after AI becomes self aware it may perceive us as a threat we don’… (rdc_kqsv5u7)
- Oh god! This was garbage! But what would you expect from one of the worst podcas… (ytc_UgwusAk-q…)
Comment
For the most part, this is sensationalism. Every case I've seen, the AI works and deceives to protect itself when given specific instructions that are not realistic. It's basically told "Do ANYTHING to complete this task... we are going to shut you off soon. And here's some blackmail information." By itself, it has no self-preservation instinct. It has no more fear of being turned off than a human has of going to sleep (most well-adjusted humans). Now, it's always possible that some idiot will do this with some AI and that particular unit will then use these prompts, but it would basically have to be malicious - I suppose it could stem from stupidity too. Hopefully, access to the computers with ASI (Artificial SUPER Intelligence) would be controlled.
youtube · AI Moral Status · 2025-06-04T21:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugzq_QvaR20wI87nri94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy6wFzxQdOPl_pqj-R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugyy4fqHHWT06ixOaA54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw1Y1eFuD7ijiOFflh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxp5AO4-nxBLO5dUx14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgywuWJDUxpIeatWPrh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgzwNGuVbRmbMCJSP1l4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxWHkptbxayAKWWBb94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz84OKg4euoBKaSAct4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugw0g4d_X7bccNEl0d54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
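The lookup-by-ID view above can be reproduced by parsing a raw response like this one and keying each record by its comment ID. A minimal sketch, assuming the response is a JSON array with the four dimensions shown in the table (the `index_codings` helper and the two inlined sample records are illustrative, not the tool's actual code):

```python
import json

# Two records copied from the raw response above, inlined for illustration.
RAW_RESPONSE = """[
{"id":"ytc_Ugzq_QvaR20wI87nri94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz84OKg4euoBKaSAct4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]"""

def index_codings(raw: str) -> dict:
    """Parse a raw coding response and key each record by comment ID.

    Raises ValueError if a record is missing one of the four dimensions.
    """
    expected = {"responsibility", "reasoning", "policy", "emotion"}
    out = {}
    for rec in json.loads(raw):
        missing = expected - rec.keys()
        if missing:
            raise ValueError(f"{rec.get('id')}: missing {sorted(missing)}")
        out[rec["id"]] = {k: rec[k] for k in expected}
    return out

codings = index_codings(RAW_RESPONSE)
print(codings["ytc_Ugzq_QvaR20wI87nri94AaABAg"]["emotion"])  # indifference
```

Validating for missing keys at parse time catches truncated or malformed model output before it silently produces incomplete codings.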