Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "As an IT-guy, I find AI tools, including AI art generation, extremely fascinatin…" (ytc_UgxUiwg3J…)
- "When my A.I on my phone takes me out to dinner or for drinks and it pays, I will…" (ytc_Ugz1SOnAk…)
- "> AI-enabled robots should be able to vote, considering their decisions would…" (rdc_dy4jakb)
- "These people are sick,..even have the guts to tell the world of the major proble…" (ytc_Ugz-BWuxk…)
- "I have seen some videos here on YouTube have a AI disclaimer but it's way at the…" (ytc_UgxRCYtCw…)
- "I love the obituary written by an AI that occasionally makes the rounds. No mat…" (rdc_jk7si5n)
- "Another point I dont think we are paying attention to is the generative AI marke…" (ytc_Ugy9KzLh_…)
- "I don't think this proves AI can't detect emotions. It proves people can't detec…" (ytc_Ugzweb9kQ…)
Comment
All AI issues are very much a first world problem. There are millions of people across the world who struggle to find enough food to get them through the day, or who lack clean drinking water, or a safe place to sleep at night. For these people, AI is irrelevant. When claims are made that societies will collapse due to AI, this only really applies to the western world, with our technology and home comforts. It would have been far better to spend the insane amount of money that has been ploughed into AI into coming up with solutions to help those in the overlooked, poorer part of the world. Instead, we worry that AI will destroy our way of life in the west and create nonsensical doomsday scenarios of the end of the world. We have the power to turn the AI off. Perhaps it is time that we should.
Source: youtube | Topic: AI Governance | Posted: 2025-09-05T16:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzkrVSAXlvrr2LuqUx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzrKr2jzap3Vn85L0R4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz_HyYJcZtJX3EXp7B4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzXU-JTzurBSI06Hs94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzac_oesFOJ9aUZinh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxvcWQBP7TrsYMnLXp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwwzadRwUpDr0snHwl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwbgGPyF4Rb7CpA6gl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyQx1-mi5exR09et5t4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy9nd90ntLrvyItpvJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]