Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
This conversation coming from AI is completely absurd and those who are highly w…
ytc_Ugyg8bgQS…
Things will keep improving over time. The rate of change is phenomenal.🚀 Have an…
ytc_UgwGTkI1q…
I dont buy this narrative !! As someone who lost job to AI and competition and o…
ytc_Ugz5Wrbik…
4 years ago, watching this video now with AI, palantair and further government s…
ytc_UgwrpWFLs…
Bro we don't know the method for using technology. So I think this is the human…
ytc_UgyGJOqEU…
At the 14:00 mark, Hinton hints at (but doesn't fully express) a possible prob…
ytc_UgwLnuihN…
Self driving cars even as a concept is so fucking stupid. Like building a train …
ytc_UgxI-ONyJ…
AI will take over, just watch, this is what Trump keeps pushing which is so stup…
ytc_Ugzduwz4V…
Comment
I think that Yes, we must remain cautious about the immense progress in artificial intelligence
Yes, it could, in the worst case scenario, destroy our humanity
Yes, only humans can feel emotions, make decisions, think correctly
If one-day intelligence dominated us, how often it was wrong
We are the only ones building a future world
However, unlike video, I think that artificial intelligence will allow great advances in the future, while obviously remaining cautious.
youtube
2024-01-13T15:3…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugz4rAoE_l3rAklWmgB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugxjpg0T_DQ_kRiK1I54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgxLA2xArwXWJ3b0f0F4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwFwLZ616gHrJfRsZh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyZ8R-5R0RB9ZKlNJR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxi7IEFLkLG2i_yAUh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyH-yx2mwy7asW3f7l4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw3fc0r6GI_qO1-lWZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyDKBsiteKsLVN9DPZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyTvkzbucpnXNSor2Z4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
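The raw response is a JSON array with one coding object per comment. A minimal sketch of how such a response could be parsed and indexed for lookup by comment ID, assuming the exact schema shown above (the two example rows are copied from the response; nothing else about the pipeline is implied):

```python
import json

# Raw model output: a JSON array of per-comment coding objects
# (schema assumed from the response shown above).
raw = '''[
  {"id": "ytc_Ugw3fc0r6GI_qO1-lWZ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyDKBsiteKsLVN9DPZ4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]'''

# Index the array by comment ID so individual codes can be looked up.
codes = {row["id"]: row for row in json.loads(raw)}

# Look up one comment's coding by its ID.
code = codes["ytc_Ugw3fc0r6GI_qO1-lWZ4AaABAg"]
print(code["responsibility"], code["emotion"])  # → ai_itself fear
```

In practice the full response would be loaded from the stored raw output rather than an inline string; the dict-by-ID shape is just one convenient way to back a "look up by comment ID" view.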