Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- @rickfowler6146 Mate, we aren't even a consideration. Once they have robot slave… (ytc_UgyGthz-5…)
- As I've said before in this comment section, 🤖🤡s calling themselves "AI artists"… (ytr_Ugyjhyp9p…)
- Is this the same ones that promote CryptoCurrency, BitCoin, NFTs, & AI art & tec… (ytc_UgzXLMfAi…)
- This is why one of the first things Trump did when taking office was repeal the … (ytc_UgyPtFFEB…)
- We need A.I. to do a better job than a CEO. So the pay and taxes are fairly distr… (ytc_UgxEZv1Mk…)
- I felt like I helped train AI back in the day with Mechanical Turk. There would … (ytc_UgxgeizLF…)
- Edward Cheung Actually I'm way more afraid of the war monger USA, than of Kim. … (ytr_Ugya4kjAL…)
- This concern with AI is overblown. These are the insurmountable limitations on A… (ytc_UgyFWWHtt…)
Comment
Problem: AI just makes things more complicated and creates a net negative. Now you are hiring people to deal with the AI, and our quality of life is actually suffering because of AI for a variety of reasons. It's the age-old game we humans play -- try to improve things but ultimately just make things worse. We COULD use AI intelligently and responsibly, but we won't -- because we're humans.

youtube · AI Governance · 2025-06-25T11:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy0nikON3sE88TIn9B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwd4tjAj_M71QHdonJ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx-jc2vNvHl_8oEz9p4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_Ugybxd0qI1dLUrNnJVJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy89jtlKlVf7tCXt3J4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzZEvj9yz38Mc925I54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgyRWoplruDqln8msRd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyV03yJLJ27yniXDK94AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugx4TxXHxoKZAhzYRk14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzgZAisXCROf4RI8ml4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"}
]
```
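The raw response is a JSON array with one coding object per comment, keyed by `id`. A minimal Python sketch of the "look up by comment ID" step, using two entries excerpted verbatim from the response above (the field names and values come from that output; everything else is illustrative):

```python
import json

# Excerpt of the raw LLM response shown above (two of the ten entries).
raw = """
[
  {"id":"ytc_Ugy89jtlKlVf7tCXt3J4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzZEvj9yz38Mc925I54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"}
]
"""

# Index the codings by comment ID so a lookup is a single dict access.
by_id = {row["id"]: row for row in json.loads(raw)}

coding = by_id["ytc_Ugy89jtlKlVf7tCXt3J4AaABAg"]
print(coding["policy"])   # liability
print(coding["emotion"])  # outrage
```

This is the same lookup the inspection page performs: parse the stored response once, then resolve any coded comment's dimensions by its `ytc_`/`ytr_` ID.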