Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
I can't see how humankind could stay in controll. We can't agree on anything and…
ytc_Ugxvo2E7s…
I just put together that AI did a lot of heavy learning and rapid improvement du…
ytc_UgzI5AhYH…
I understand and agree to a certain degree, but here's the thing: Doesn't the hu…
ytc_Ugw0RLaMp…
I screamed at Gemini to leave me alone, and it did. So far so good!…
ytc_UgxqeCfjA…
Dang. You always hear about AI potentially being able to replace a lot of the me…
rdc_f1edm3c
The US economy drives China to produce more greenhouse gasses through manufactur…
rdc_gx6m2wp
This video was uploaded ~1.5 years ago and AI still can't do shit nowadays. It's…
ytr_Ugy1Qw73D…
AI boosters love to say, AI learns to create art like humans do, by looking at b…
ytc_Ugz1NUsCP…
Comment
Thank you, hearing from Geoffrey Hinton about the dangers of AI was very informative. There is the danger of digital intelligence superseding our biological intelligence and deciding it doesn't need us, or devising new scams we hadn't been able to imagine.
But what I observe, and perhaps others can confirm, is that people are moving toward knowing things only because someone told them, and they don't bother to pay attention or remember things on their own. We are "advanced" in our current situations, but severely deficient in being able to survive if things in our situations disappear or go wrong. Can a student read 2-3 books and synthesize them into a report aimed at answering a specific set of questions on their own power, or do they want to use an AI app to get it done in an hour or so, ready to submit tomorrow morning? Why hand-draw illustrations for a children's book, and write the text, when you can use an app to design a beautiful, consistent book with custom artwork "instantly" (according to the ad that pops up during the video)?
youtube
AI Governance
2025-06-17T07:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgxsWAzkhBA3e2g4Jxp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxDAL0h-qiaVVqFIx94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz_hXaGm8UuM_4ol6h4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzq4rxqQTiIMaItYZ54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw64MI23YUjIeFLykR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzThZKNT9JN7PvplCh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxBenCswI0LsMYS1ll4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugxa5ajqU_IJFQfH6014AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgwKSqUL9DeID2ha3jV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugw68TcF4AXxfWkaRb54AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"mixed"}
]
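The raw response above is a JSON array of coded records, one per comment, each carrying the four dimensions shown in the coding-result table. A minimal sketch of how such a response could be validated before ingestion is below; the allowed value sets are assumptions inferred from the values visible in this sample, not a confirmed schema, and `validate_records` is a hypothetical helper name.

```python
import json

# Assumed vocabularies per dimension, inferred from the sample records above.
ALLOWED = {
    "responsibility": {"company", "developer", "government", "user",
                       "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "mixed",
                "indifference", "unclear"},
}

def validate_records(raw: str) -> list[dict]:
    """Parse a raw LLM response and flag any out-of-vocabulary codes."""
    records = json.loads(raw)
    problems = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                problems.append({"id": rec.get("id"),
                                 "dimension": dim,
                                 "value": value})
    return problems

raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate",'
       '"emotion":"approval"}]')
print(validate_records(raw))  # [] when every code is in vocabulary
```

Flagging rather than rejecting keeps the pipeline tolerant of a model occasionally emitting a label outside the vocabulary; flagged records can be re-queued for recoding.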