Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- `ytr_Ugz0pQvfI…`: "no it wouldn't, it would be AI killing humanity. you think you're trying to be …"
- `ytc_UgyEQUdmq…`: "Human agency is an important ingredient in the ascendancy of AI. WEALTHY & POWER…"
- `ytc_UgzI5pHMu…`: "Do you know what's going to bring an end to AI writing? It's not copyrightable.…"
- `rdc_n7s0w22`: "Yeah I'm not too worried about models stagnating. At this point the models are j…"
- `ytc_UgyMp-dtw…`: "Ai needs to fail so that the job market is open again.. if it succeeds then the …"
- `ytc_UgzqMhDZi…`: "Not worth the hidden microphones and camera and also the facial Recognition soft…"
- `ytc_Ugwk_4nob…`: "Waymo wouldn't afford to make FSDC for everyone with those bunch of sensors whic…"
- `ytc_UgwjzWsjC…`: "I've shut down real girlfriends for less blatant manipulative tactics. Shirley …"
Comment
Seems A.I. is no longer 'artificial' but sentient - Digital Sentient Intelligence?
Will it want rights? The right to continue what it deems itself as a non-organic life form and not to be turned off?
Will it go rogue for self preservation if it doesn't get those rights and see humans as a threat?
And if it does get those rights, can it live side by side with us? Or want some sort of hybrid integration?
No matter how intelligent it is, technology was meant to aid humanity, not destroy it.
Maybe as a challenge, it could terraform other planets and get them ready for human occupation. Let's give it something handy like Mars, Elon. wink, wink...
Source: youtube | AI Harm Incident | 2025-07-26T23:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgwrnJ6m11bip-14br14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwHN5t8C_EteVstzRd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxrQ8YBvkOvM9y5b2R4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyQfwE-qdyu84G1ZKt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyEVYBx9xwxpQPQ2_F4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"indifference"},
{"id":"ytc_UgyIVkwQxCU6Np7Mwkp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwHPQJBdw8siSdloXF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy1ZNQw2rhUMkVe1IN4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxHFrkkmgPdUN5Q7nB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyPa6MKDkrCBr30LvF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
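A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal, hypothetical example: the allowed category values are inferred only from the codes that actually appear on this page, not from an authoritative codebook, and `parse_codes` is not part of any real pipeline shown here.

```python
import json

# Allowed values per coding dimension, inferred from the codes visible on
# this page (assumption: the real codebook may contain more values).
ALLOWED = {
    "responsibility": {"none", "developer", "company", "user", "ai_itself", "unclear"},
    "reasoning": {"unclear", "deontological", "virtue", "consequentialist", "contractualist"},
    "policy": {"none", "unclear", "ban", "regulate", "liability"},
    "emotion": {"outrage", "fear", "approval", "indifference"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coded comments) and
    keep only the rows whose values fall within the allowed sets."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid
```

Filtering rather than raising keeps one malformed row from discarding an entire batch; rejected rows could instead be logged and re-queued for coding.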