Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Yes. Everyone who works with AI knows what it is, how it works and what its limi…" (ytc_Ugzhue-j2…)
- "I agree that that tech needs to be fixed, badly. Facial recognition is thus far …" (ytc_UgyaLCpXB…)
- "LLMs determine their responses by a web of associations with numerical values. t…" (ytr_UgwavInkJ…)
- "Companies just like to use it as an excuse to cut costs. I see no real value fro…" (ytc_Ugx7YRmEO…)
- "WE SHOULD INVEST HEAVILY ON ARTIFICIAL INTELLIGENCE IF WE WANT TO BE SUPER POWER…" (ytc_UgxNaVrND…)
- "My Human - AI labor system is this, we could buy AI robots to be hired by compa…" (ytc_Ugy9I0FEW…)
- "Exactly, I’ve felt unwell today,I carried on working, made mistakes and was slow…" (ytr_Ugzq9RsYd…)
- "Why there is need of human like ai in the first place if there are about 8 billi…" (ytc_UgyS7fNyB…)
Comment
It's fascinating.
You laugh at people that got themselwes honeypotted into dating an ai, due to them unknowingly exploiting its tendency to be agreeable to everything, and then what? ask that same ai some questions and announce armageddon?
LLMs aren't sentient not because of their lack of pure computing power, or suitable training data, or bacause they aren't sufficiently advanced yet. They aren't sentient precisely bacause they are LLMs. And asking what is essentially a poor replica of a but a single part of human brain - language understanding - and sumutaniously gaslighting it into giving you a wild answer, then saying "WOW! a LLM says we got a 65% chance to die! It's surely right!"... Brother. Look at yourself.
Source: youtube · AI Harm Incident · 2025-10-12T12:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[{"id":"ytc_Ugyixu4KgX1Z6d-sWA94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgxlhNo1leyTt6gOyrF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"outrage"},
 {"id":"ytc_UgwPGAVKMaFKfyXXRph4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgwNsO6nG0nqNghwO6h4AaABAg","responsibility":"user","reasoning":"mixed","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgyMuRH3CCp-MsoJ3_l4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugzf9YPU_Ci65bAXahN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_Ugxf0OoAaMH7E8N7Rmh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
 {"id":"ytc_UgxXRYqr_SDNGg__YKp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgxwdHKDduXxBkJvTA94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgyxuY9IdbYepiPupMF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}]
```
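The lookup-by-comment-ID workflow above can be sketched in a few lines: parse the raw LLM response (a JSON array with one record per comment, using the four coding dimensions shown) and pull out the record whose `id` matches. The `lookup_coding` helper and the inline sample records are hypothetical illustrations, not the page's actual implementation.

```python
import json

# Sample raw LLM response: a JSON array of coding records, one per comment.
# The field names (id, responsibility, reasoning, policy, emotion) mirror the
# response shown above; the records here are abbreviated examples.
raw_response = """[
  {"id": "ytc_UgwPGAVKMaFKfyXXRph4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyMuRH3CCp-MsoJ3_l4AaABAg", "responsibility": "company",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]"""

def lookup_coding(raw: str, comment_id: str):
    """Parse a raw response and return the coding record for one comment ID,
    or None if the ID was not coded in this batch."""
    records = json.loads(raw)
    return next((r for r in records if r["id"] == comment_id), None)

record = lookup_coding(raw_response, "ytc_UgyMuRH3CCp-MsoJ3_l4AaABAg")
print(record["emotion"])  # indifference
```

Returning `None` for an unknown ID (rather than raising) makes it easy to distinguish "comment not yet coded" from a malformed response, which would fail loudly at `json.loads`.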