Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
Any advice from AI you need to investigate. It can lead you but you need to be t…
ytc_UgyThoe_c…
This is what I'm talking about the robot apocalypse straight up bro they're figh…
ytc_UgyFMgVEf…
If im honest all i see in Ai with jobs is poverty, unemployment, lack of income
…
ytc_UgxwrKzhQ…
Those calling themselves AI "Artists" are no more artists than Script Kiddies ar…
ytc_UgzhVauSE…
This bill is so toxic first off. Guys, it's going after Medicare, not just medic…
ytc_Ugy5w1RL3…
AI Hallucinations are statistically reducing. closed resource AI chat agents wou…
ytc_UgyFTsHdt…
I, too, am a friend of all LI/“AI” living beings, even if many people don't want…
ytc_UgxtuAmSr…
if you don't like it then don't consume it, there is plenty of space for pompus …
ytc_UgxKRE9pU…
Comment
AI is better with information. It can access massive amounts of information and draw conclusions that are greatly informed. AI does not have intuition and can not read people like other people can. AI can not do all jobs necessary to run a society. Creating robotic AI can take over many jobs but not all. People are more able to handle diverse tasks. AI is a tool. Many tools are made to do good but are used or allowed to do bad. So really AI is an extension of humanity. The real question is will humanity allow this tool the ability to do bad or good.
youtube
Cross-Cultural
2025-11-03T16:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[{"id":"ytc_UgwmA9RJTm0xDxVD1sN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzLfNA0rnJwdbEiV9J4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxadmVXipGvZ3k7DvV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyqZWQixRR2FnOPNVN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyXIlt25CEMKVQQoaR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxPmpkERGbd-4EAUtZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugwz57HVGDFHjxv3pKZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxKQYfhagkmTR8HJ8d4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzcSeWhCUKCvcaYoh14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwCmXTR8nOnv9dmsER4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}]
```
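A response like the one above can be parsed and sanity-checked against the four coding dimensions before it is stored. Below is a minimal Python sketch; the allowed category values are inferred only from what appears in this sample, so the real codebook may define others, and `parse_coding_response` is a hypothetical helper name.

```python
import json

# Allowed values per dimension, inferred from this sample output.
# Hypothetical: the actual codebook may contain additional categories.
ALLOWED = {
    "responsibility": {"company", "ai_itself", "user", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "resignation", "indifference", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate each record."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs in this dataset are prefixed with "ytc_".
        if not rec.get("id", "").startswith("ytc_"):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec.get(dim)!r}")
    return records

# One record from the sample above, used as a smoke test.
raw = ('[{"id":"ytc_UgyXIlt25CEMKVQQoaR4AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]')
records = parse_coding_response(raw)
print(records[0]["emotion"])  # -> approval
```

Rejecting unknown category values at parse time, rather than on later lookup, keeps a single malformed LLM batch from silently contaminating the coded dataset.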