Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Is it really wise to develop them when earths resources are getting scarce, ther…" (ytc_Ugz4efkJe…)
- "I can't wait for a world where humanoid robots call a robo taxi to catch a UAV f…" (ytc_UgwNoGQ60…)
- "AI art shouldn’t replace creativity, the one thing humans excel at. For example,…" (ytc_UgzTvtBIu…)
- "I was trying to force Gemini to tell me whether it would start a war or settle f…" (ytc_UgwHLGe2a…)
- "@invisibl3.no it does not. if you tell an ai, “make whatever art you want 😊” it’…" (ytr_UgwhT7UjQ…)
- "The problem is capitalism... (or at least the bullshit mythologised unregulated …" (ytc_UgzN_dzhy…)
- "People are losing jobs today due to AI, yet none of those billionaires are givin…" (ytr_Ugx8iyxvz…)
- "I do digital art, and it can take me from 3 hours to 31 hours, and once i got ov…" (ytc_UgzQaLbLf…)
Comment
A.I. is a idiot, Ask google AI if it can do math and it will tell you "no". It can only give you information that someone programmed it to tell you, it can't think for it's self. It's basically a computer that response according to it's programmers beliefs. I should know I argue with it all the time, usually till it will give me the right answers. A good example is this Iran War Trump just started for no reason, Trump called it a War, the news is calling it a war, but Trump is also calling it something else. well A.I. won't call it a War even though the rest of the world is calling it that. So that would be the programmers fault, it also means that they are Republican MAGA.
Source: youtube · AI Harm Incident · 2026-03-19T00:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugy2B_Gx1wI_vaWos354AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwWiPs9ViCqk1kZmX54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxgX7__dEhB8JyLFcd4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugxb_afLiQBOAgTQM2l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwIr5Amamo8i8gKZS54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwXmcmvjwobVCLBlaF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgziXyxN1ycWQHJjEqR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxvWnJtNIWpjlDDbA54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgwMxPaqng5OBIseGKB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzTVrf6nFyvu7yji-J4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
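A raw batch response like the one above can be parsed and indexed for lookup by comment ID. The sketch below is a minimal illustration, not the tool's actual implementation; the allowed category values in `CODEBOOK` are inferred from the samples shown here, and the real codebook may include values not seen in this batch.

```python
import json

# Allowed values per coding dimension (assumption: inferred from the
# examples in this document; the project's actual codebook may differ).
CODEBOOK = {
    "responsibility": {"developer", "company", "government", "ai_itself",
                       "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"liability", "regulate", "ban", "industry_self", "none"},
    "emotion": {"outrage", "fear", "approval", "indifference", "resignation"},
}

def index_codings(raw_response: str) -> dict:
    """Parse a raw LLM batch response and index rows by comment ID,
    dropping any row with a value outside the codebook."""
    indexed = {}
    for row in json.loads(raw_response):
        if all(row.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            indexed[row["id"]] = {dim: row[dim] for dim in CODEBOOK}
    return indexed

# Two-row example: one valid coding, one with an out-of-codebook value.
raw = '''[
{"id":"ytc_UgzTVrf6nFyvu7yji-J4AaABAg","responsibility":"developer",
 "reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_hypothetical_bad_row","responsibility":"robots",
 "reasoning":"unclear","policy":"none","emotion":"fear"}
]'''

codings = index_codings(raw)
# The invalid row is dropped; the valid one is retrievable by its ID.
```

Validating against the codebook before indexing catches the common failure mode of batch coding, where the model occasionally emits a label outside the allowed set.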