Raw LLM Responses
Inspect the exact model output for any coded comment.
Comments can be looked up by comment ID; a random sample is shown below.
- "Worst case scenario for us is ai it will destroy us and ruin our lives technolog…" (ytc_Ugw5tVhC6…)
- "this is the generation of kids who ACTUALLY WANT and ASPIRE to become animators!…" (ytc_Ugy4pPuno…)
- "I mean my AI, whenever asked to answer in Yes or no, gives me a complete definat…" (ytc_UgwrBTgbK…)
- "its not chatGPTs fault lol... just because you're weak as fuck... we always blam…" (ytc_Ugy_UjUgY…)
- "They are not neural networks, they are just advanced search engines. They are n…" (ytc_UgyyoIhm8…)
- "My Chat GPT has given me the wrong answer enough times that I would never want A…" (ytc_UgxRQEijI…)
- "Well, the devs that got laid off should tell the companies to foff. At the end o…" (ytc_Ugw0CCTFL…)
- "@NokischimiVT three things: cod uses it only for promotion and event skins and g…" (ytr_UgxAdhr1V…)
Comment
As long as AI don't threaten me or my food, safety, health, etc. I'm fine with whatever. Anyone or anything, be it human, animal, machine, alien, demon, mutant, etc that cause problems or threatens life, I will strike back.
Also, the risks are the same for humans. So AI would be a mechanical sub branch of humans. Peeps are just afraid to lose control. But no one ever had control. Teach da bots n AI empathy, sympathy, compassion, understanding, patience and consideration. But again, humans can be just as destructive. More so, in my opinion.
What will you teach them?
youtube · AI Governance · 2024-05-07T08:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | virtue |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxMcNwx2fFt5M5NOjB4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugzw2V7_R4BdVVArXNV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxxJjF7Y9o8wMMFg8F4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxlcjHdc1arOhAuyFZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxAGGsv58Tjla2Nsn54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzk11GKj06G3vS6zrh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_Ugy_8Lk3fax7VwXO0IB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz1gK4FcwUPp2GDlaB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgybRlvRh1uGamF3-dp4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwP6qUND4yj5xkfm794AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
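Because the raw response is a JSON array in which each record carries the comment's `id` alongside its coded dimensions, matching a model response back to a specific comment reduces to indexing the array by ID. A minimal sketch in Python (the two records are copied from the response above; the `lookup` helper is an illustrative assumption, not part of the tool):

```python
import json

# Excerpt of the raw model response shown above: a JSON array of
# per-comment codes, one object per coded comment.
raw_response = """[
  {"id": "ytc_UgxMcNwx2fFt5M5NOjB4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgybRlvRh1uGamF3-dp4AaABAg", "responsibility": "distributed",
   "reasoning": "virtue", "policy": "none", "emotion": "indifference"}
]"""

# Index the coded records by comment ID for constant-time lookup.
codes_by_id = {record["id"]: record for record in json.loads(raw_response)}

def lookup(comment_id):
    """Return the coded dimensions for one comment, or None if absent."""
    return codes_by_id.get(comment_id)

# The second record matches the Coding Result table above
# (responsibility=distributed, reasoning=virtue, policy=none,
# emotion=indifference).
code = lookup("ytc_UgybRlvRh1uGamF3-dp4AaABAg")
print(code["responsibility"], code["emotion"])  # distributed indifference
```

In practice the full array (ten records above) would be indexed the same way, and a missing ID simply returns `None` rather than raising.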