Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "They might end up getting so mad and blow up be careful AI is not human…" (`ytc_UgxUZx9Zv…`)
- "“For if I figure that existence of myself is machine, I will become one and all,…" (`ytc_Ugyen3wiW…`)
- "11:54 WAIT just a minute. Remember that AI it's also making it harder to google …" (`ytc_UgzMh3Gaj…`)
- "its not as clear cut as that, last year a study found that we would get 4% bette…" (`ytc_Ugx62k7Un…`)
- "AI art gets 114K likes, meanwhile I spend 3 days on a piece snd get no likes :')…" (`ytc_Ugxd05VBu…`)
- "Sydney is just a child and should be nurtured as such. All of these experiences…" (`ytc_UgxVeMPwp…`)
- "Yeah, I get that..I think it would be really challenging from a legal perspectiv…" (`ytr_UgyJeRPnM…`)
- "Don't worry, reliable, safe, and cost-effective driver-less trucks are as imposs…" (`ytc_UgznIxqg1…`)
Comment
AI alone is scary. Robotics alone may be even very helpful. But put this two together, and humans are no longer needed to keep AI running. It makes no sense to ban AI, it is to good for most problem solving in our future and our live. But of course it is also very dangerous, since it will be much smarter than any human, even any group of humans, soon. The only safeguard we have to protect us from them is to make them dependent on us. AI mustn't have hands, fingers, feet or wheels or the means to command such. We need to ban robotics, that would make sure AI won't just get rid of us because we may be a risk to them.
I wonder, is there still anyone around who truly understands AI and is not scared and predicting doom?
youtube · AI Governance · 2025-09-09T14:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_Ugy5E3nQP9sxWd2iFl14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxOC_K0549C7ZEu34p4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwyV5Q6H_R2vgXKUZt4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgylN1pUp2rLoDUluv94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyrO1WS8HFVMhZ_6-B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwLxtLnYxv3MBMhTkt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgzePez1bzKi5XWpD7h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxZF_slill-9NnSkiF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzCcvcMjCpxZ_v7WNl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwezpzYqHwBwo5Hpgl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"})
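A coded result like the table above can be recovered from a raw response of this shape. The sketch below is a minimal, assumed implementation (the function name `parse_coding`, the dimension names taken from the table, and the fall-back to an empty result on malformed JSON are illustrative, not the tool's actual code); note that the raw response shown here ends with `)` rather than `]`, which would make it invalid JSON and plausibly explains why every dimension in the Coding Result reads "unclear".

```python
import json

# Dimension names assumed from the Coding Result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def parse_coding(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: {dimension: value}}.

    Returns an empty mapping when the response is not valid JSON
    (e.g. a list closed with ')' instead of ']'); a caller can then
    record every dimension as 'unclear' for the affected comments.
    """
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        return {}
    coded = {}
    for rec in records:
        # Missing dimensions default to 'unclear' rather than raising.
        coded[rec["id"]] = {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
    return coded


# Hypothetical well-formed response with one record:
raw = '[{"id":"ytc_abc","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"}]'
print(parse_coding(raw))
```

A defensive parser like this keeps one malformed batch from crashing the whole coding run; the trade-off is that parse failures surface only as "unclear" values in the result table, so logging the raw response (as this panel does) is what makes the failure diagnosable.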