Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "To compare AI with humans is a mistake. It's far more alien from our own intelli…" (ytr_UgyrHbyjy…)
- "Good recall and telling people what they want to hear is not intelligence, nor i…" (ytc_UgyvSzQVU…)
- "Can l even trust myself, if an AI can do this!? I mean, this is insane! Scam W…" (ytc_UgzkQ5i0B…)
- "This is the beginning of 2 leaders of robot factions that will soon run the worl…" (ytc_Ugw8h7iwZ…)
- "I'm impressed by how AI is being used in healthcare. It's helping to diagnose di…" (ytc_UgyGJjiNV…)
- "Last time when to AI spoke to each other we had to turn it off because they spok…" (ytc_UgxA6OQdy…)
- "With the rise in productivity with AI, you won't necessarily have job losses. P…" (ytc_Ugy2JH4lJ…)
- "Gonna be a crazy OpenAI movie by Christopher Nolan in a few years. Or we'll all …" (rdc_m21ltii)
Comment
These models behave in a manor that is consistent with the data that they have been fed. Their data comes from humans who are extraordinarily capable of rationalization for the sake of self preservation. No matter how advanced or "conscious" LLM's might seem they are glorified parrots; they simply use data to predict responses to questions. Most humans would respond in a manner consistent with self preservation, most humans would behave differently during a test. This is what we are seeing.
youtube
AI Harm Incident
2025-07-25T03:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgxdE3l-fDH2axgoFmh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw3wirU5ozAvxQYD4J4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugxvv9jlkHrCVeg9sEh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy8e9PVCp_04GhuN8R4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugzr1mlkani93OXomrd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxCIBNYZNRJHm1xcA94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwPfjTLyqfT2iqXTP94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzsl8r96ceOUZ2tY594AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyX-i_CHSRxcN5fUbZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwTY5ifZtprwsDWxmB4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
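Raw responses like the array above can be validated before use: parse the JSON and check each record's dimensions against the coding categories. The allowed values below are a minimal sketch inferred only from the sample output shown here; the project's actual codebook may define additional categories.

```python
import json

# Allowed values per coding dimension, inferred from the sample
# LLM output above (hypothetical schema, not the official codebook).
SCHEMA = {
    "responsibility": {"none", "developer", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"indifference", "outrage", "approval", "mixed"},
}

def parse_coded_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose values
    fall inside the schema for every coding dimension."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items())
    ]

# Example: one valid record, one with an out-of-schema emotion.
raw = (
    '[{"id":"ytc_a","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"indifference"},'
    '{"id":"ytc_b","responsibility":"developer","reasoning":"virtue",'
    '"policy":"regulate","emotion":"euphoria"}]'
)
kept = parse_coded_batch(raw)
print([rec["id"] for rec in kept])  # ['ytc_a']
```

Dropping (rather than coercing) out-of-schema records keeps the coded dataset auditable: each rejected record can be traced back to its comment ID and re-coded.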