Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
He says we dont know how AI works ! Thats strange , those who created AI dont kn…
ytc_UgyzWI-md…
@actormichaeldouglass hahaha keep up??? No such thing exists. What is the real …
ytr_UgzVOm6pT…
good thing i never give more than 2 seconds once i realize its ai voice, which i…
ytc_UgxnSIGLq…
So if robots are going to take the blue collar jobs and AI is going to take the …
rdc_jd7r17e
There's already an AI that reviews programs. It's not too long until you just ty…
ytr_Ugw8xEz2b…
Anyone else seeing a disturbing pattern in Korea? Like... why do we like K-Pop a…
ytc_Ugx_17Fa-…
25:09 I gotta nitpick Shad a bit.
Saying "theft doesn't matter, because non-the…
ytc_Ugzwl1Hmb…
I'm already calling it: AI will take all the jobs not cause it's good, but cause…
ytc_UgyO68Xjt…
Comment
Your Majesties, this year the Nobel committees in physics and chemistry have recognised the dramatic progress being made in the new form of Artificial Intelligence. This new form of AI excels at modelling human intuition rather than human reasoning.
Unfortunately, the rapid progress in AI comes with many short-term risks. In the near future, AI may be used to create terrible new viruses and horrendous lethal weapons that decide by themselves whom to kill or maim.
We have no idea whether we can stay in control.
We have evidence that if they are created by companies motivated by short-term profits, our safety will not be the priority.
We urgently need research on how we can prevent these new beings from wanting to take control. They are no longer science fiction. Thank you.
youtube
AI Responsibility
2025-11-10T12:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugz7aBIYXHaDZL9FFPh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugxy8RtKPnGnPmDqCDt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz0L_rSniWpmseLHxl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugy4etuqqBBcpoCjHNd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxQ4H_0-DJbGPBW0oB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxnrLStaxJHaVz17wp4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzHadEsKzHBfCYSQop4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzbeCBo9xPOGGfiddB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugy8jf84QMRQW6u0RF14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzHXSRpjPiwrWKxnnB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
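The raw response above is a JSON array of per-comment codes, one object per comment ID, with one value for each of the four dimensions shown in the result table. A minimal sketch of parsing and validating such a batch, assuming Python and treating the value sets visible on this page as the allowed codes (the real schema may define more):

```python
import json

# Allowed values per dimension, inferred from the coded examples on this page.
# These sets are assumptions, not the tool's authoritative schema.
ALLOWED = {
    "responsibility": {"company", "government", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "unclear", "none"},
    "emotion": {"outrage", "fear", "mixed", "approval", "indifference"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response; keep only well-formed, in-schema rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue  # malformed row: skip it rather than crash the pipeline
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

# Hypothetical two-row batch: the second row uses an out-of-schema value.
raw = ('[{"id":"ytc_x","responsibility":"company","reasoning":"mixed",'
       '"policy":"regulate","emotion":"fear"},'
       '{"id":"ytc_y","responsibility":"alien","reasoning":"mixed",'
       '"policy":"none","emotion":"fear"}]')
print([r["id"] for r in parse_batch(raw)])  # only "ytc_x" survives validation
```

Dropping (rather than repairing) out-of-schema rows is one reasonable design choice here: rows that fail validation can be re-queued for a retry prompt, which keeps the stored codes clean without guessing at the model's intent.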