Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)

| Comment ID | Comment (truncated) |
|---|---|
| ytc_UgwGHHHkt… | I didn’t vote in building this technology. These people have no more right than… |
| ytc_UgxtG1tAA… | i literally JUST watched a video about how to use glaze and nightahde as you wer… |
| ytc_Ugxop9MCB… | I just write a book with AI, I didn’t do it because I wanted to have written a b… |
| ytc_UgxkdR5d6… | What if we introduce the AI to each other and they can grow up as brothers, that… |
| ytr_Ugx2hSdJH… | @lc7507 you think they wont be aware of their weakness and they would let you ju… |
| ytc_Ugx6Gk95M… | "Why AI decided to kill for the first time." It's obvious Digital Engine needs w… |
| ytr_UgzRHkrcV… | It’s the same issue with both avenues when it comes to AI: no matter how good it… |
| ytr_UgxMc4Vjo… | @gondoravalon7540 „For example, Project Revoice - which gives back voices to peo… |
Comment

> I had a scary thought
> What if that robot on some error or loophole in its code shot the humans nearby to test a theory and succeeded in eliminating them ,
> My question is then what they do ,
> Do they stand there for indefinitely or will they think to do a mass testing on that subject or by some miracle will they try to upgrade
> This is so unrealistic,i know but its not bad to imagine such sacry scenarios for fun in the head

Source: youtube · AI Harm Incident · Posted: 2023-12-19T08:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
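Each coded comment reduces to a flat record with one value per dimension. Below is a minimal sketch of that record in Python, assuming only the dimension names from the table above and the value labels visible in this sample; the underlying code book may define more categories than appear here, and the placeholder ID is illustrative.

```python
from typing import TypedDict


class CodingResult(TypedDict):
    """One model-assigned coding for a single comment (dimensions as in the table above)."""
    id: str              # comment ID, e.g. "ytc_..." for comments or "ytr_..." for replies
    responsibility: str  # values seen in this sample: "ai_itself", "developer", "government", "distributed", "none"
    reasoning: str       # values seen in this sample: "consequentialist", "deontological", "virtue", "unclear"
    policy: str          # values seen in this sample: "ban", "regulate", "liability", "none", "unclear"
    emotion: str         # values seen in this sample: "fear", "outrage", "approval", "mixed"


# The coding shown in the table above, with a placeholder ID
# (real IDs appear in the raw response below).
example: CodingResult = {
    "id": "ytc_PLACEHOLDER",
    "responsibility": "ai_itself",
    "reasoning": "consequentialist",
    "policy": "unclear",
    "emotion": "fear",
}
```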
Raw LLM Response
```json
[
  {"id":"ytc_Ugz_j81905JHoyshcLh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxuKF_tufqgd43nSpt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugwga1xx4-nRPZZWrAh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxrZbGycrK5zjUCbSd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyFHstuXwVz3RUmvYd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgyzPTUz57rNcryV0pR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwbWL1rCgiOrnUuN8Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxmSCkQQbhNciZKcQx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzXDIQxcPG6ZGaqBqN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx6jTOrqrIZHATq_zh4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
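Because the raw response is a plain JSON array of such records, the "Look up by comment ID" search above amounts to parsing the array and indexing it by `id`. Here is a minimal sketch under that assumption; the `raw_response` variable and the single-record payload are illustrative stand-ins for the full batch shown above, not the tool's actual code.

```python
import json

# Raw model output for one coding batch: a JSON array with one record per comment.
# Shortened to a single record here; in practice this is the full array shown above.
raw_response = """
[
  {"id": "ytc_UgxrZbGycrK5zjUCbSd4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
"""

records = json.loads(raw_response)

# Index the batch by comment ID so any coded comment can be pulled up directly.
by_id = {record["id"]: record for record in records}

lookup_id = "ytc_UgxrZbGycrK5zjUCbSd4AaABAg"
coding = by_id.get(lookup_id)
if coding is None:
    print(f"{lookup_id} not found in this batch")
else:
    for dimension in ("responsibility", "reasoning", "policy", "emotion"):
        print(f"{dimension}: {coding[dimension]}")
```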