Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `rdc_l56p0de`: "at this point , ai is like google for me. super helpful and saves a ton of time…"
- `ytc_UgzCz47Zh…`: "Andrew Ng's talk on AI's potential for business got me thinking about its huge p…"
- `ytc_Ugyg7zY0r…`: "How about we don't let children use AI? How about parents actually be responsibl…"
- `rdc_jrperam`: "Can we please stop with this misleading trash? This was in September, which mea…"
- `ytc_UgxgVEGQO…`: "If the enemies also use Ai. And all the things will be perfect. All i want to sa…"
- `ytc_UgxLaF03S…`: "Thank you for sharing this VERY IMPORTANT MESSAGE Senator. AI can do many good t…"
- `rdc_mt7yunh`: "Juniors are the ones that might get replaced. After that it's uncertain, not unt…"
- `ytc_Ugwh9K2bj…`: "AI art... Pay for it??... Just \"CLIP Interrogate\" the shit out of that picture a…"
Comment

> One of the biggest risks, in my opinion, is far more subtle than weapons. They're recommendation algorithms. Content now comes to me based on previous searches. Biases are strengthened. This is currently leading to polarisation. ML models are already mass manipulating the population.
>
> EDIT: I kept watching and Jeff talks about precisely this.

Source: youtube | Topic: AI Governance | Posted: 2025-09-19T15:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
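Each coded record assigns one value per dimension, so a downstream check can reject any record whose values fall outside the expected label sets. A minimal sketch follows; the allowed values here are assumptions inferred from the labels that appear in this batch, not an authoritative schema.

```python
# Sketch: validate one coded record against the label sets observed in
# this coding batch (assumed label sets, not the authoritative schema).
ALLOWED = {
    "responsibility": {"company", "developer", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist",
                  "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"fear", "mixed", "indifference", "outrage", "approval"},
}

def validate_record(record: dict) -> list[str]:
    """Return a list of validation errors; an empty list means the record is clean."""
    errors = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            errors.append(f"{dim}: unexpected value {value!r}")
    return errors

record = {"responsibility": "company", "reasoning": "consequentialist",
          "policy": "regulate", "emotion": "fear"}
print(validate_record(record))  # → []
```

A record missing a dimension, or carrying a label outside these sets, comes back with one error per bad dimension, which makes it easy to flag malformed model output before it enters the dataset.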
Raw LLM Response
[
{"id":"ytc_Ugz86Mc3sI8fKvBv3KB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugycxqxbe095BOmub5N4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyBUlYQUlvv1ezBJaV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwuUzb_FowdRbcX8OV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugxo689DrtD-6PIpP414AaABAg","responsibility":"developer","reasoning":"mixed","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz6jk-s8s1qJOni2-p4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugwclt6WOAJiJOYhJQJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxGZtajLdZhoSuuUzd4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyvvpemetoGCg9VK354AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgyDnUuJU5K-xOriYvF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
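The raw response above is a JSON array in which each object carries the comment `id`, so the "Look up by comment ID" operation reduces to parsing the batch and indexing it by that field. A minimal sketch, assuming the model reliably emits valid JSON (a real pipeline would also handle `json.JSONDecodeError` for malformed output):

```python
import json

def parse_batch(raw_text: str) -> dict[str, dict]:
    """Parse a raw LLM batch response and index the records by comment id."""
    records = json.loads(raw_text)  # raises json.JSONDecodeError on invalid model output
    return {rec["id"]: rec for rec in records}

# Hypothetical single-record batch in the same shape as the response above.
raw = ('[{"id":"ytc_Ugz86Mc3sI8fKvBv3KB4AaABAg","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')

by_id = parse_batch(raw)
print(by_id["ytc_Ugz86Mc3sI8fKvBv3KB4AaABAg"]["emotion"])  # → fear
```

Indexing once per batch keeps later lookups O(1), which matters when a session inspects many coded comments against the same response.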