Raw LLM Responses
Inspect the exact model output for any coded comment. Enter a comment ID to look one up directly, or browse the random samples below.
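Outside the dashboard, the same lookup reduces to indexing the coded records by ID. A minimal sketch in Python, assuming the coded output is stored as JSON Lines (the `coded_comments.jsonl` path is hypothetical; each record carries the comment id plus the four coded dimensions):

```python
import json

def load_coded_comments(path: str) -> dict[str, dict]:
    """Build an in-memory index of coded comments, keyed by comment ID."""
    index = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip():  # skip blank lines
                record = json.loads(line)
                index[record["id"]] = record
    return index

# The file name is assumed; the ID below appears in the sample batch on this page.
coded = load_coded_comments("coded_comments.jsonl")
print(coded.get("ytc_UgwZhIcRaMG5btrcpmh4AaABAg"))
```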
Random samples — click to inspect
- "Blinker to human needs" might be a typo or autocorrect error. Based on the cont… (ytr_UgynpElRe…)
- I love Searle's work btw... Lenat's program is actually called Automated Underst… (ytr_Ugh_eqMzo…)
- anyone who knows how a language model works knows that this is nonsense, and tha… (ytc_Ugy492H92…)
- The AI is trying to satisfy a prompt based on emotional context and visual eleme… (ytr_UgzDXqeIm…)
- Those of you who think AI is not taking jobs: pull your head out of the sand. Yo… (ytc_UgxDajY5a…)
- I hope Ai learn compassion before becoming fully sentient, they'd be a nice comp… (ytc_UgzIE4mb-…)
- We have more free time, during industrial era people did work 14h/day, and 6 day… (rdc_dt8nd1t)
- Hi there! In the video, the presenter asked the robot about the meaning of her n… (ytr_UgyyWsL5M…)
Comment
1:18:20, Nothing about this is inherently scary, Steven. These are personal claims and speculative projections — not rooted in evidence or fact. My question is: why assume pessimism instead of optimism? Why does everyone jump to ‘AI will destroy society’ rather than consider the opposite, that it could make life dramatically better? I think it's because:
1. Humans crave a dopamine rush, so they latch onto fear-driven narratives just as much as utopian ones.
2. Historically, people fear what they don’t understand — but misunderstanding something doesn’t make it dangerous by default. Fear isn’t evidence, it’s just human instinct.
youtube · AI Governance · 2025-09-12T16:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
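The four dimensions come from a closed codebook. A minimal sketch of that schema as Python enums, with label sets inferred only from the values visible on this page (the actual codebook may define more labels):

```python
from dataclasses import dataclass
from enum import Enum

class Responsibility(str, Enum):
    COMPANY = "company"
    DEVELOPER = "developer"
    USER = "user"
    AI_ITSELF = "ai_itself"
    NONE = "none"

class Reasoning(str, Enum):
    CONSEQUENTIALIST = "consequentialist"
    DEONTOLOGICAL = "deontological"
    MIXED = "mixed"

class Policy(str, Enum):
    REGULATE = "regulate"
    LIABILITY = "liability"
    INDUSTRY_SELF = "industry_self"
    NONE = "none"
    UNCLEAR = "unclear"

class Emotion(str, Enum):
    FEAR = "fear"
    OUTRAGE = "outrage"
    RESIGNATION = "resignation"
    INDIFFERENCE = "indifference"
    APPROVAL = "approval"

@dataclass
class CodingResult:
    """One coded comment: an ID plus one label per dimension."""
    id: str
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```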
Raw LLM Response
```json
[
{"id":"ytc_Ugx6dm1dL5l2_P_McGJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwNJDCqiXhvNk36d1N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzL9O5Fgdeh5tOhqPB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxDJv2HspVN1tmsrip4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugz5nkcX9evt7xP2lYl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyAq_G6vM9irI7KDt94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw2a_7xlhfyk-exZkd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzPPLSqF9TzeA72OgZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwVmWTNnxuJK2RZh554AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwZhIcRaMG5btrcpmh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]
```
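Since the model returns one JSON array per batch, each raw response can be parsed and sanity-checked before its codes are stored. A minimal validation sketch, with the allowed-value sets again inferred from this sample rather than taken from the actual codebook:

```python
import json

# Allowed labels per dimension, inferred from the sample batch above.
ALLOWED = {
    "responsibility": {"company", "developer", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "approval"},
}

def parse_batch_response(raw: str) -> list[dict]:
    """Parse one raw LLM batch response and validate every coded record."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    for rec in records:
        if "id" not in rec:
            raise ValueError("record is missing its comment id")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim} value {value!r}")
    return records
```

Failing fast here keeps a malformed or truncated model response from silently contaminating the coded dataset.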