Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
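A rough sketch of what this lookup amounts to, assuming the coded responses live in a JSONL store with one record per comment (the file name and field layout here are assumptions, not the tool's actual storage):

```python
import json

def lookup_by_comment_id(comment_id: str, path: str = "coded_comments.jsonl") -> dict | None:
    """Return the stored coding record for one comment ID, or None if absent.

    Assumes a hypothetical JSONL store where each line is one record like
    {"id": "ytc_...", "responsibility": "...", "reasoning": "...", ...}.
    """
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None
```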
Random samples (click to inspect):

- Don't even bother. "workers wont adapt fast enough" with AI . Your going to lose… (ytc_Ugy9FHe_q…)
- AI can do anything any human can do at least in the next 10 years. The sad truth… (ytc_UgyUCR_Ry…)
- its not like ai art sucks, its because its so cheap, afforable and instant and t… (ytc_UgzdPD2qU…)
- Pffff.... the problem is that big corporations and interests are getting wet fro… (ytr_UgyhUrrXg…)
- The use case for AI always was solely to increase profit by replacing workers. T… (ytc_UgyFUVaJF…)
- I really appreciate this video. We need more and more big industry names to take… (ytc_UgwZkeHX0…)
- Ill be honest, id be most embarrassed because i have 5 different tails ai, and a… (ytc_UgxVVT-QX…)
- The economy will crash they will just invest profit to more robots paying 0 tax … (ytc_UgyNF48cz…)
Comment
It seems to me a neural network, or any logical system, does not - by itself - have a 'goal'. Our own goals don't come from our thinking and concluding. They are innate, inherent, and deeply there regardless of anything we think or don't think. In organisms, goals are there because of natural selection. Ultimately, all our goals boil down to self preservation. In a proactive AI, we would need to similarly give it a priori goals. If, instead of 'self preservation' we make their most primal goal be 'serve human interests' this should be a good way to begin.
youtube · AI Governance · 2025-06-30T12:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
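The four coding dimensions and their labels can be read off this table and the raw responses below. A minimal sketch of that schema as a validator, with the label sets inferred from values visible on this page (an assumption, not the project's official codebook):

```python
from dataclasses import dataclass

# Label sets inferred from values visible on this page; an assumption,
# not the project's official codebook.
RESPONSIBILITY = {"developer", "user", "government", "distributed", "ai_itself", "none"}
REASONING = {"deontological", "consequentialist", "virtue", "unclear"}
POLICY = {"regulate", "liability", "industry_self", "none", "unclear"}
EMOTION = {"fear", "outrage", "approval", "mixed", "indifference"}

@dataclass
class CodingResult:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        # Reject any label that falls outside the sets observed above.
        for name, value, allowed in [
            ("responsibility", self.responsibility, RESPONSIBILITY),
            ("reasoning", self.reasoning, REASONING),
            ("policy", self.policy, POLICY),
            ("emotion", self.emotion, EMOTION),
        ]:
            if value not in allowed:
                raise ValueError(f"unexpected {name} label: {value!r}")
```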
Raw LLM Response
[
{"id":"ytc_UgyAZJlio5IYzQPbPtJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx_-p84CHuYEeQaM9J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwUX5MajKdSACC1I4x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxtgfiqZ_X26Qp8WXJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyKvXGbwZ5f-4qJpkR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxMp8M9CgkI79DaYtB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzqpqEQ_PCv65PdAiF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugxx_PIequBdckkz9Kl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxpUOhRQWRRUPCDMp14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzQL1LQmSbaW5cHDuZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
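The raw response is a JSON array with one object per coded comment, so a downstream step presumably maps each ID to its coding labels. A minimal sketch of that parsing, assuming the model output is exactly the well-formed array shown above (the function name is illustrative):

```python
import json

def parse_batch_response(raw_text: str) -> dict[str, dict]:
    """Map comment ID -> coding labels from one raw batch response.

    Skips malformed entries without an "id" field instead of failing
    the whole batch.
    """
    results: dict[str, dict] = {}
    for entry in json.loads(raw_text):
        comment_id = entry.get("id")
        if comment_id:
            results[comment_id] = {k: v for k, v in entry.items() if k != "id"}
    return results
```

Applied to the array above, `parse_batch_response(raw)["ytc_UgwUX5MajKdSACC1I4x4AaABAg"]` returns the developer / deontological / unclear / mixed labels shown in the Coding Result table.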