Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- it’s too bad I kill all my character ai’s in the end: someone died. Me or the AI… (ytc_Ugyoy3mrR…)
- This is all a huge bad joke in reality Ai should have had the violins playing… (ytc_UgyDFgNSx…)
- Liberal Democracy was never really democracy. Just democracy for the merchants. … (ytc_UgzxS9y8r…)
- bruh I literally had to fix a bug caused by a senior (yapper) dev who pushed LLM… (rdc_moxvg6c)
- "Oughhhh my ai cant copy people’s art anymore this is an outrage how dare you yo… (ytc_UgwAsXxUU…)
- @johnathanera5863 It isn’t alive. It cannot get inspired or look at im… (ytr_Ugz9CvUGf…)
- Honestly, as a consumer of media, I drop platforms and services that obviously a… (ytc_UgzedlAnX…)
- Imagine asking an AI to write another AI chatbot, yeah nah, there's still gonna … (ytc_Ugz-YF-eg…)
Comment
I'm saying this as a programmer, this guy is full of shit. Anyone with even basic-moderate levels of programming experience will tell you this guy has zero idea what he's talking about. He's just doing this for attention. We are nowhere near creating a true, self aware artificial intelligence. I'll try to go into more detail if you want me to, but the simple fact is the way our computers work they simply cannot accurately represent a human brain, let alone reach a level of self awareness. It's like asking "how do we know whether a normal car can run on water and not need gas". Any mechanic, or anyone who knows how an engine works at all, could immediately tell you it is simply foolish to even entertain the car that normal, gas powered vehicle can run on just water. Similarly, any programer can tell you that to think modern computers could actually run a selfaware AI is just a foolish statement to make and shows a lack of understanding of how computers (and our brains) work.
youtube · AI Moral Status · 2022-07-15T03:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
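Each coded record fills the four dimensions in the table above. A minimal validation sketch follows; note that the allowed value sets below are only those observed in the samples on this page (the project's full codebook may define more), and the `validate` helper is hypothetical, not part of the tool shown here.

```python
# Value sets observed in this page's samples; the full codebook may define more.
ALLOWED = {
    "responsibility": {"company", "none", "distributed", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "mixed"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"outrage", "indifference", "fear", "approval"},
}

def validate(record: dict) -> list:
    """Return (dimension, value) pairs that fall outside the observed value sets."""
    return [(dim, record.get(dim)) for dim in ALLOWED
            if record.get(dim) not in ALLOWED[dim]]

# The coding result shown above passes; an unknown value is flagged.
coded = {"responsibility": "none", "reasoning": "consequentialist",
         "policy": "none", "emotion": "outrage"}
print(validate(coded))  # []
```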
Raw LLM Response
[
{"id":"ytr_Ugws3nxHpLRiMeKRC-d4AaABAg.9dUeJQScOca9dVjy9K2vCU","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytr_Ugxs2kHxvfDrSw6bvgd4AaABAg.9dTrHwbORnp9dUb0oJ16Ki","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugxs2kHxvfDrSw6bvgd4AaABAg.9dTrHwbORnp9dUewn_dDV2","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytr_Ugxs2kHxvfDrSw6bvgd4AaABAg.9dTrHwbORnp9dUkhq4oBJj","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgznKggAYaYBpWrknTp4AaABAg.9dTl_CEmI0P9dUcBjorEe6","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytr_UgwEpDDJC0VeC8NZ6fN4AaABAg.9dTeuzpvSU89dW6FKYgx8U","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxObHT_dNbniwnJDOl4AaABAg.9dT1BV-Ytc69e7bFiuNCQH","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytr_Ugx6WObmpFjle_o2jHt4AaABAg.9dT-1T5ann59dTEYgB3e9a","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgwJQjJ3UCQ9-_NgG0d4AaABAg.9dSiMC0Hj0T9dUclW-v8AN","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytr_UgyYQ-mtLbwEAK87XTl4AaABAg.9dSS38eu94l9dYP4poQzi8","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
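The raw response is a JSON array of records, each keyed by a comment `id`, which is what makes the lookup-by-comment-ID view above possible. A minimal sketch of that lookup, assuming only the record shape shown in the response (the `index_by_comment_id` helper is hypothetical, and the array below is excerpted to two records for brevity):

```python
import json

# Excerpt of the raw LLM response above (array truncated to two records).
raw_response = """
[
 {"id":"ytr_Ugws3nxHpLRiMeKRC-d4AaABAg.9dUeJQScOca9dVjy9K2vCU","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
 {"id":"ytr_Ugxs2kHxvfDrSw6bvgd4AaABAg.9dTrHwbORnp9dUb0oJ16Ki","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
"""

def index_by_comment_id(raw: str) -> dict:
    """Parse a batch response and key each coded record by its comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_by_comment_id(raw_response)
rec = codes["ytr_Ugws3nxHpLRiMeKRC-d4AaABAg.9dUeJQScOca9dVjy9K2vCU"]
print(rec["emotion"])  # outrage
```

Indexing once into a dict makes each subsequent ID lookup O(1), which matters when inspecting individual comments out of a large batch.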