Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `rdc_m27sofa`: "this is hype BS, AI accelerates but doesn't really replace much. And the real …"
- `ytc_Ugy6zAPW0…`: "Doremon ka woh robot attacking on human wala movie yaad aa rha h vyy..!! 😅…"
- `ytc_UgwvaBTP-…`: "Here he is wrong, internet mobile are different, AI will make people jobless but…"
- `ytc_UgzZM5s24…`: "Imagine an *Ai camera* with a *gun* just recognizing faces and killing people ,,…"
- `ytc_UgyBrhjhB…`: "How are you supposed to afford food and your house if ai does all the work…"
- `ytc_UgzuU0OpW…`: ""We have no idea how AI actually works." Stupidity number one. Then we train it …"
- `rdc_gd893cq`: "Self-aware autonomous AI will be treated the same way humans treat every neural …"
- `ytr_Ugy-ktkB3…`: "aliceuwu1013 I'll have to check it out sometime. Personally I'm fine with AI mus…"
Comment
@CrimeSpree-u4u The ai will not decide that if its not programmed/grown that way. And also there is not only one ai. And it is likely posible to make AGI safe. Agents are trained on objectives. Observing the ethical behavior of other agents itself can be such an objective. There are many ways to make multiagentic systems safe. The problem are corrupt people.
Platform: youtube
Video: Viral AI Reaction
Posted: 2025-11-22T21:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytr_UgxWyDv3-G2_xCLsAQZ4AaABAg.APqGj-0Oc05APyz5tfFHIC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytr_Ugxa4pt7EjAywSLnjd54AaABAg.APqGaiufUXVAPqMiEt_2pa","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_Ugy9_YeIW9UtNWrQR_l4AaABAg.APqGHxZQooxAPqIGd4f-Ov","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytr_Ugx_zeqoXRwSbW0onll4AaABAg.APqG9P2clLOAPqYQs6t53L","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytr_UgxSfjnGft9qyHe_XgR4AaABAg.APqFvvHtUh8APqd9gAnwTP","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgxMdSyxBaaZy9kRL494AaABAg.APqFisSChLmAPqQ8tAcLLt","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgxMdSyxBaaZy9kRL494AaABAg.APqFisSChLmAPqSynu1jdG","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytr_UgxMdSyxBaaZy9kRL494AaABAg.APqFisSChLmAPqVVU21i4A","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgwMLuMOhXITvm2hQiZ4AaABAg.APqF9n_I1MoAPqPODE_45L","responsibility":"government","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgyZD0PcHylcoKo2RN14AaABAg.APqEWwMo6JOAPqH1A1jiri","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
```
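The raw response is a JSON array with one object per comment, each carrying the four coded dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). Looking up a comment by ID therefore reduces to parsing the array and indexing on the `id` field. A minimal sketch of that lookup, with validation against the value sets seen in this dump (the sample IDs and the allowed-value sets below are illustrative assumptions, not the project's actual codebook):

```python
import json

# A miniature batch response in the same shape as the raw LLM output above.
# These IDs are made up for illustration; real ones look like "ytr_Ugx...".
RAW_RESPONSE = """
[
  {"id": "ytr_example1", "responsibility": "developer",
   "reasoning": "deontological", "policy": "industry_self", "emotion": "approval"},
  {"id": "rdc_example2", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
"""

# Allowed values per dimension, inferred from the codes visible in this dump
# (assumption: the real codebook may define more categories).
DIMENSIONS = {
    "responsibility": {"developer", "company", "government", "ai_itself",
                       "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"industry_self", "regulate", "none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "indifference"},
}

def index_by_id(raw: str) -> dict:
    """Parse a batch response and build a comment-ID -> codes lookup."""
    index = {}
    for rec in json.loads(raw):
        # Reject records whose codes fall outside the known value sets.
        for dim, allowed in DIMENSIONS.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={rec.get(dim)!r}")
        index[rec["id"]] = {dim: rec[dim] for dim in DIMENSIONS}
    return index

codes = index_by_id(RAW_RESPONSE)
print(codes["ytr_example1"]["emotion"])  # approval
```

Validating every record before indexing catches malformed model output early; a record with an out-of-vocabulary code raises instead of silently entering the dataset.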