Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
For repetitive tasks, AI is useful, but Axalem enhances my critical thinking whe…
ytc_Ugw1A1EDo…
Has anyone noticed how atrociously a lot of the AI scientists dress? Do you real…
ytc_Ugyekh9FE…
If development testing for these self-driving cars get a ban, this pedestrian is…
ytc_UgyRLMUoU…
People complaining about immigrants taking their jobs, when all these big compan…
ytc_UgxC7zkJh…
I'm disabled, and I can tell you, I miss work. I miss routine. I miss the abilit…
ytc_UgwU4rfVI…
1:21:27 She's missing something here. There is a justification for paying worker…
ytc_UgwJ_tYSN…
Treat that AI as another human artist and apply the same copyright rule to the A…
ytc_UgwMU1uQE…
fruity looking people going after less but still fruity people for using ai (the…
ytc_Ugw6XMpMW…
Comment
All of these scenarios never talk about the possibility of empowering the human brain, through microchips, epigenetics, pharmacology, etc. AI has great potential, but not yet the efficiency of the human brain, nor its flexibility and adaptability. The question is, will we have the other man? The fact that AI does all the work is great for automation, but it matters in whose hands it lies for a utopia to happen instead of a dystopia. Man can always seek and try to do more and be at his best. He doesn't have to become a pirgo in front of the car, but he has to challenge it and look for where he can be better. Indeed, a machine in power is devastating, but in other contexts is this always optimal?
youtube
AI Governance
2025-12-25T12:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugz2_CJvDO7avNVQU_14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgwXivdZvM8hADyWarR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzkv_1wFzZTqMlBJiJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgyB1mlBpu7djDLwkmV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx0N6aTRY_RJoNzFxV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzmzH7WNug8aMMgp-V4AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugy5elZlPMJ9FUwCl0Z4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugys6WZ1rS3IJJK_0jh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxDH42lQKsR8eb0du14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"disapproval"},
{"id":"ytc_UgxcX471cqunnmL-TbJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"}
]
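The raw response above is a JSON array mapping each comment ID to the four coding dimensions shown in the Coding Result table. A minimal sketch of parsing and validating such a response in Python follows; the allowed label sets are assumptions inferred from the values visible on this page, not an authoritative codebook.

```python
import json

# Allowed labels per dimension; assumed from the samples shown here,
# not from an official coding manual.
ALLOWED = {
    "responsibility": {"government", "developer", "company", "user", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"ban", "regulate", "liability", "none"},
    "emotion": {"fear", "outrage", "approval", "disapproval",
                "indifference", "resignation", "mixed"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: codes}."""
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        codes = {dim: row[dim] for dim in ALLOWED}
        # Reject any label outside the expected set rather than
        # silently accepting a malformed coding.
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim} label {value!r}")
        coded[cid] = codes
    return coded
```

With the response parsed this way, the "look up by comment ID" action above reduces to a dictionary access, e.g. `parse_raw_response(raw)["ytc_Ugz..."]["emotion"]`.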