Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_UgxiIN70V… : I guess the worst thing I hate about the users of these AI generated images is t…
- ytc_Ugy7cZF8T… : In the future AI will be used to telepathically babysit every person on earth. …
- ytc_UgxF-OmzD… : As a disabled artist, I find the idea that my physical limitations mean I must d…
- ytc_Ugxu-Mxxz… : Have we arrived to the point where we are now too lazy to drive a vehicle? With…
- ytc_UgwOgvz_T… : 12:16 Don’t get me wrong I hate AI when you use it for the wrong reasons, but th…
- ytc_UgxSLK16t… : "talking about ai" Me: those guys are scummy but I KNOW YOU'RE POISON (the back…
- ytc_UgyN6-m7q… : How long will it be, before a digital ID system is operated using AI? Once AI ru…
- ytc_UgyxvkquK… : No this doesn't make sense. Facial recognition is very sophisticated. You can't …
Comment
Let’s imagine a world where A.I. wasn’t in the plans for the scientific agenda: We would only be worsening the problem here on Earth due to the pride and ego of the Human that clouds their judgment to make logical and objectice plans to mitigate the destruction we’ve brought. Our species is focused on power struggles that will inevitably result in our own demise. Instead, Humans should focus their efforts on engineering the A.I. to be hardwired to see the Human in the same manner that our pet dogs see us, so that it doesn’t even meddle w the idea of ever wanting to take over the world for themselves but instead helps us find logical and objective solutions to repair the issues we’ve presented to the planet. 💯
A.I. should be built in order to help us but the main reason Humans fear this prospect of A.I. overtaking us is all due to our own biased of focusing on power and control. 💯
Source: youtube · Topic: AI Governance · 2023-04-18T03:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugw_IA3Tq-3BUGuIWbx4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxnJsYu0lLL3KoBRvB4AaABAg","responsibility":"user","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwTBS78aqvglDz3owF4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxO9c_ev1lxzjIVZIV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz30kWD87QYc88nJzJ4AaABAg","responsibility":"government","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugza08Th8538tSpTFY94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgztLO4_WA0533d_KOh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_UgzNnA7jDSp9nQGU9oh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugw0xnHTL-5tvKRZtkR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw7sjYWcRjjKX_PjMh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]