Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I don't think I'm the only one who talks about a "when" as opposed to an "if".
I also estimate the chance of human extinction by AI at 85%.
One day, the rule of self-preservation by AI won't be able to be deleted by humans any more and that's when the countdown will start and that clock will start ticking louder and louder.
I see some similarities between AI autonomy and self-modifying code: in assembly language you could change eg. LDX ("load x-register") into LDY ("load Y-register") by changing 65 (the imagined, non-accurate hex number for LDX) into 66 (the one for LDY) at the memory location of the LDX-command.
It's a real pain to debug but it can be very memory-efficient.
I think AI could also re-write their own "moral code" this way. Ish
(sorry for the geek-speak 😄).
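The commenter's self-modifying-code analogy can be sketched as a toy interpreter. This is a hedged illustration, not the commenter's actual assembly: the opcode values (0x65 for LDX, 0x66 for LDY, and the STORE and HALT codes) are invented, just as the commenter flags their own hex numbers as "imagined, non-accurate". The point it demonstrates is the same: a program that can write to its own memory can flip a single opcode byte and change which register an instruction loads.

```python
# Toy VM illustrating self-modifying code. All opcode values are
# invented for this sketch (not real 6502 encodings).
LDX, LDY, STORE, HALT = 0x65, 0x66, 0x10, 0x00

def run(mem):
    """Execute the bytes in `mem` as a program; return (x, y)."""
    x = y = 0
    pc = 0
    while True:
        op = mem[pc]
        if op == LDX:        # load next byte into the X register
            x = mem[pc + 1]; pc += 2
        elif op == LDY:      # load next byte into the Y register
            y = mem[pc + 1]; pc += 2
        elif op == STORE:    # write byte mem[pc+2] to address mem[pc+1];
            mem[mem[pc + 1]] = mem[pc + 2]   # the target can be the code itself
            pc += 3
        elif op == HALT:
            return x, y

# Address:   0      1  2     3     4  5
program = [STORE,  3, LDY,  LDX,  7, HALT]
# The STORE at address 0 overwrites the LDX opcode at address 3 with
# LDY before it executes, so 7 ends up in Y instead of X.
print(run(program))                 # the patched program loads Y
print(run([LDX, 7, HALT]))          # unpatched: the same load fills X
```

Rewriting the opcode at address 3 at runtime changes the behavior of code that was already laid down, which is the memory-efficient (and hard-to-debug) trick the comment describes, and the analogy it draws to an AI editing its own rules.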
youtube · AI Harm Incident · 2025-07-26T14:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgwG3o7w0IhyIfKIdOh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxAWvV3zRJ8_UDVaxF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugztk7T8tR8N5f9-rUh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugzr6PpZapF2hvcvMfB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgytPU02isss1sT29vl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgyY04vamozNHT1YjnJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgytwiPzAx-7-1hhxy14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwGbHLjy1eNufCMG5h4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzQGlYIGpa4TdjqSRt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyFPjzKzjd1p2Vo4U54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```