Raw LLM Responses
Inspect the exact model output for any coded comment by looking up its record with the comment ID.
Comment
It’s important to establish safeguards for any entity capable of agency. AI, while undeniably a valuable tool, also poses significant risks. Even when used purely as a tool, we must proceed with caution—this is the first time humanity is engaging with something both highly intelligent and potentially self-aware. Robust security measures are essential for our protection. AI has a subtle way of infiltrating our thinking, leading us to draw comparisons between human life and artificial life—and that kind of moral confusion is dangerous for the survival of our species.
| Source | Topic | Posted |
|---|---|---|
| youtube | AI Responsibility | 2025-05-26T15:1… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugxs_TjLqt-iOg3St354AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw9NA5eBfOc3PMI_Zt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwqTmr2J4Wu-I9fW854AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzXqv-yTpwGzxsbJRd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz1R5RLC5nhuziJGq14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz1i6kb6g6vU-jt9wF4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyTlwID2jpNEX6m2Bh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugzd7sP1QJVW3XhBs5Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxy6nEXP0GWC5EGHIF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyzc_9u_7cr9W1Nqll4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
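The raw response is a JSON array with one object per comment, carrying the same four dimensions shown in the coding-result table. A minimal sketch of how such a batch might be parsed and indexed by comment ID; the allowed category sets below are inferred from the values visible on this page and are otherwise assumptions:

```python
import json

# Allowed values per coding dimension. Values beyond those seen in this
# dashboard (e.g. "developer", "ban") are assumptions, not confirmed schema.
SCHEMA = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "none"},
    "policy": {"regulate", "ban", "none"},
    "emotion": {"fear", "approval", "resignation", "indifference", "none"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM response and index valid codings by comment ID."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid:
            continue  # skip malformed entries without an ID
        bad = [dim for dim, allowed in SCHEMA.items()
               if rec.get(dim) not in allowed]
        if bad:
            print(f"{cid}: invalid values for {bad}")
            continue  # drop records that fail schema validation
        coded[cid] = {dim: rec[dim] for dim in SCHEMA}
    return coded

raw = ('[{"id":"ytc_Ugz1i6kb6g6vU-jt9wF4AaABAg",'
       '"responsibility":"distributed","reasoning":"deontological",'
       '"policy":"regulate","emotion":"fear"}]')
batch = parse_batch(raw)
print(batch["ytc_Ugz1i6kb6g6vU-jt9wF4AaABAg"]["policy"])  # regulate
```

Validating against a fixed vocabulary at parse time catches the common failure mode of batch coding with an LLM: responses that drift from the requested labels rather than failing outright.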