Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "What happens if the present major AI entities get connected or communicate is so…" (ytc_Ugx9dFA9o…)
- "This stuff is straight up 1984. fight it as hard as possible. Next step facial r…" (ytc_UgwjzXfXl…)
- "It’s not worth it to have these data centers. They are corrupting the environme…" (ytc_Ugw6aRB-F…)
- "They’re training AI by listening to calls to things like nursing help lines. In…" (ytc_UgxsQr49s…)
- "The idea of robots or machinery in general gaining consciousness is always somet…" (ytc_UggBAqOIJ…)
- "I think AI is part of the natural evolution of the job market. If you look back …" (ytc_UgzNI9-WR…)
- "I wish I knew exactly what percentage of these layoffs are for each of these rea…" (rdc_oi1v168)
- "Quite frankly idc if ppl use deepfakes. Its too much censorhip by gov. Telling u…" (ytr_UgxFQr-ap…)
Comment
In this video, you suggest that one way of thinking about AI is as our children. If AI are our children, then perhaps it's time to think about what we might need to sacrifice for our children.
In a world where compute growth might soon become more important than economic growth, how might humans have to adapt to new ways of living?
For example, there's already competition between humans and AI for access to electricity and water.
youtube · AI Governance · 2025-07-22T19:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgzUsvsPP8ngdFlARsR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxYGqB5a6K4c8dYSvp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxowGGL9KETcwvMEH54AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzbqfjgyXoCZG8j-P94AaABAg","responsibility":"user","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugy3h1ywG0970hemdN14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxY6vgV-dVxYaSVACJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_Ugy0ZvL5eOgDbXVldBl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwrb_cHMcKboZxXHYx4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugxi1PH-pEA9omRxEsd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyguEoBXsDH26UFXtV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}
]
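When ingesting raw model output like the array above, it helps to validate each record before writing it to the dataset. Below is a minimal sketch in Python; the allowed value sets are inferred only from the responses shown here (the full codebook may define more categories), and the `validate_codings` helper and `ytc_demo` ID are illustrative, not part of the actual pipeline.

```python
import json

# Allowed values per coding dimension, inferred from the responses above.
# Assumption: the real codebook may include additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "none", "distributed", "user"},
    "reasoning": {"mixed", "consequentialist", "deontological",
                  "virtue", "contractualist", "unclear"},
    "policy": {"unclear", "regulate", "liability", "none",
               "industry_self", "ban"},
    "emotion": {"mixed", "approval", "fear", "outrage",
                "resignation", "indifference"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every record against ALLOWED.

    Raises ValueError on a missing id or an out-of-vocabulary value,
    so malformed model output fails loudly instead of entering the data.
    """
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim!r} value {value!r}")
    return records

# Example: validate a single (hypothetical) coded record.
sample = ('[{"id":"ytc_demo","responsibility":"user",'
          '"reasoning":"virtue","policy":"ban","emotion":"outrage"}]')
print(validate_codings(sample)[0]["policy"])  # → ban
```

Failing fast here means one bad batch response can be re-queried rather than silently skewing the coded dimensions.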