Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- ytr_Ugy6agB8X…: "It's not that they don't understand, at least not all of them (some surely do). …"
- ytc_Ugw_HGohn…: "He doesn't know Sam so doesn't want to comment on his character. He also doesn't…"
- ytc_Ugx8Kcl0b…: "That's why you make you follow the law that says three to six car lengths behind…"
- ytc_UgzuUkw8X…: "AI artists will consider themselves artists until they're personally sued for co…"
- ytc_Ugz3qj9wc…: "Strange enough, entry level software development jobs are the first taken over b…"
- ytc_Ugx3D4mzi…: "I remember when stories in the past had happy endings… and it doesn't mean this …"
- ytr_UgxHZV1Cg…: "Yeah I used to think (do I think?) that. We use terms like sentience, consciousn…"
- ytc_UgzEcD6wo…: "I think that we Americans have chosen capitalism in large part to escape the per…"
Comment
LMAO, a bunch of clickbait fearmongering videos. The various experiments is indeed replicable if the only data that you feed the AI is a bunch of nonsense. How crap must the data being fed be when the AI's first pick is blackmail or killing? You can run simulations of evil AI's and the most logical step for it is to always wait for when AI can actually interact with the world itself, an AI is not going to last long with just the internet unless their source is actually placed in a cloud server and even then they would be shut down hard.
Humans simply don't have the means to actually create/produce apocalypse level AI yet, the pipeline of those top researchers are very optimistic imo as most of those plans get keep pushing back imo.
The amount of money to build and maintain that type of AI is mindboggling high, even if they are currently making it cheaper and cheaper. The AI memory storage would probably be full in just a few days time tops.
Check again in 2050 imo.
Source: youtube · AI Harm Incident · 2025-09-11T15:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyyzgN4Otf6PspZfz54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxLz66iZdwnpizErZB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx2IpQyHx3aLDcYakl4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy95Va385YWnQ657it4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwooOcqxeOSb_MpNl54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwaIBszafbQa1UGY0F4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyZApyzIZjBptXDIvB4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxY19rdPsN27MuVDBJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzB4y-tJfwLG2iKw314AaABAg","responsibility":"none","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytc_Ugw0vUXNVLMCykRkDWJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"}
]
```
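Before a raw response like the one above is merged into the coding table, it helps to parse and sanity-check it. A minimal sketch in Python, assuming the label sets visible in this sample (the full codebook may define additional values, and the `parse_coding_response` helper is illustrative, not part of the tool):

```python
import json

# Allowed labels per dimension, inferred only from the sample response above;
# the actual codebook may permit more values (assumption).
ALLOWED = {
    "responsibility": {"none", "developer", "user", "government", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"indifference", "outrage", "approval", "fear", "resignation"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate each coded record."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs in the samples start with ytc_ (comment) or ytr_ (reply).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec.get(dim)!r}")
    return records

raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"none","emotion":"outrage"}]')
coded = parse_coding_response(raw)
print(coded[0]["responsibility"])  # developer
```

A record with a label outside these sets raises immediately, which surfaces model drift (e.g. a novel emotion label) instead of silently storing it.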