Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

> If it’s possible to one day allow human minds to upload into the system, with our physical forms still present, we may still look towards the best interest of our biological self. Otherwise, is it even possible to program AI to feel? If we are able to mind-load into a robotic form, will we still have levels of compassion if our physical form is no longer present, since pain, hunger, time, and all the physical barriers are no longer relevant? I suppose that would solve space-time travel. No need for oxygen, food, or degrading of a physical form.

Source: youtube · Topic: AI Governance · Posted: 2025-06-26T12:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxSK-OzCKcEAPjKgIF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxDLtkyAd8f-yD6pox4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugzv7_MAG8AKic11zVx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyfbySFj1qTo83zub54AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwnL8Qudr8she09U6R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwNfPYwUjKCbGYSPbF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugwn0oAd-wUMOPFM1QF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgynxaSuCAb4VLoFk9F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugx-HdMMrinyDY9URvF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxrHK0ZZldXRXzDUbd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
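A raw response like the one above can be indexed by comment ID to recover the coded dimensions for any single comment. The sketch below is a minimal, hypothetical example (the function name `index_by_id` and the `DIMENSIONS` tuple are assumptions, not part of the tool); it parses the JSON array, skips rows without an `id`, and defaults any missing dimension to `"unclear"`. The two sample rows are taken from the response above.

```python
import json

# Hypothetical raw model output: a JSON array of per-comment coding rows,
# trimmed here to two entries from the response shown above.
raw_response = """
[
  {"id": "ytc_UgxSK-OzCKcEAPjKgIF4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugzv7_MAG8AKic11zVx4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

# The four coding dimensions used in the table above (assumed fixed).
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw: str) -> dict[str, dict[str, str]]:
    """Parse raw model output and map comment ID -> coded dimensions."""
    coded = {}
    for row in json.loads(raw):
        if "id" not in row:
            continue  # skip malformed rows with no comment ID
        # Keep only the expected dimensions; default missing ones to "unclear".
        coded[row["id"]] = {d: row.get(d, "unclear") for d in DIMENSIONS}
    return coded

coded = index_by_id(raw_response)
print(coded["ytc_Ugzv7_MAG8AKic11zVx4AaABAg"]["policy"])  # -> regulate
```

Defaulting absent dimensions to `"unclear"` mirrors the fallback value the coding scheme itself uses, so downstream tallies never encounter a missing key.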