# Raw LLM Responses

Inspect the exact model output for any coded comment.
## Random samples

- "You explained the technology well enough but It's really not fair to equate Waym…" (ytc_UgxBBVOKe…)
- "Anyone who understands why he said that will be as scared as him. AI is dangerou…" (ytr_UgzlTNpaQ…)
- "Yup real the girl thinks she is a real person but she is an ai😂😂😂😂😂😮😮…" (ytc_Ugy_5eFGQ…)
- "what pisses me off is using technology to diminish dangerous work or do things h…" (ytc_UgwRAay5o…)
- "It really is funny I can name a few who defend AI-prompting calling themselves a…" (ytc_UgzcoHM4d…)
- "Funny I find this video now. I was helping my thesis advisor respond to an email…" (ytc_UgymR7wF4…)
- "You’ll lose your drinking water. Showering water. Water for your pets, garden, l…" (ytc_UgxiAANjY…)
- "@Gintamahosen AI is better than you, too. You are not above everyone as you po…" (ytr_UgzDKUDgB…)
## Comment

> If tech is led by nonTech people this leads to catastrophic damages in few years. AI is a tool and every tool has purpose, but no tool can ever replace what people have more important accountability. I see AI as pure risk of human prosperity not because I don't like AI, but because it gets easily in hands of bad guys who have zero barrier. If you have nuclear weapons you cannot get it easily in shop, AI you can just pay. The damage bad guys can cause is equally catastrophic like with nuclear weapons. They can hack these weapons and use them (maybe not today but in 1-5years). They don't need to buy nor smuggle they just control them since all requires chips, code and communication and this can be hacked.

youtube · AI Jobs · 2026-04-25T05:5…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
## Raw LLM Response
```json
[
{"id":"ytc_UgzMkW5VfqkNei2Nm794AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwbYqwVB_etC-L4Lsl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzUcIpVrE00Lft9ewV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxxxC2V3ve6KcztukV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwFjiLY2GOVi_SsrzN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwprkjtHrbHiabUozN4AaABAg","responsibility":"government","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwmTJqm-kGEr2oPg1h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx-JEl0_Nr03tSW5aJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxpzcQSEVO577jH0ll4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxjqfzexvWz0qgX-eh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
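A raw batch response like the one above can be parsed into per-comment records and checked against the label vocabulary before it feeds a results table. A minimal sketch in Python, assuming the value sets observed in this dump are the full vocabularies (they may be incomplete):

```python
import json

# Allowed values per dimension, inferred from the labels seen in this dump.
# Assumption: these sets may not cover every label the coder can emit.
SCHEMA = {
    "responsibility": {"user", "company", "developer", "government",
                       "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"indifference", "outrage", "approval", "fear", "mixed"},
}

def parse_coded_batch(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments) into a
    dict keyed by comment ID, dropping records with out-of-vocabulary labels."""
    coded = {}
    for rec in json.loads(raw):
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            coded[rec["id"]] = {dim: rec[dim] for dim in SCHEMA}
    return coded

raw = ('[{"id":"ytc_example","responsibility":"distributed",'
       '"reasoning":"deontological","policy":"regulate","emotion":"fear"}]')
print(parse_coded_batch(raw)["ytc_example"]["policy"])  # → regulate
```

Keying by comment ID makes the lookup-by-ID view above a plain dict access, and the vocabulary check surfaces malformed or hallucinated labels instead of silently storing them.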