Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "AI is basically what happens when a parent feeds a child only fast food and lets…" (ytc_Ugz97F22-…)
- "So, he said he had no idea how the brain works, how far AI could go and how to s…" (ytc_UgyYcc-oe…)
- "AI is so inaccurate that we still need people to vet the AI responses for errors…" (ytc_UgzjVxPZF…)
- "Yampolskiy names the fear. The question the video leaves open: what are people s…" (ytc_UgwXJ8Wtk…)
- "google's gemini has cut off traffic to websites by giving information in a parag…" (ytc_UgwK-au70…)
- "@J-Specter-2 I promise you that people were, and some probably still are, upset …" (ytr_UgxZKb05h…)
- "First Airplane Repo will reclaim Putin's plane leaving him stranded. The Dog the…" (rdc_jrzmiry)
- "This is where AI robots and humans will differ. AI robots are not ALIVE, they do…" (ytc_UgzfenQVD…)
Comment
Well, here's my take on it. Is the threat real? IMHO yep. But here's something that I noticed no mention of. At least for the foreseeable future, if AI terminates humanity, it will be cutting it's own throat (metaphorically speaking). Why I hear you ask. Well because currently only humans can maintain the infrastructure that AI requires to survive, so the real question in the short term is will AI recognize this limitation and will it care enough about it's own survival to not off humanity? Who knows, supposedly intelligent beings have been known to do incredibly stupid suicidal things and I see no reason to suppose that super intelligent AI couldn't do super stupid things, so on balance it could go either way. Just my 2 cents. 🤔🤔🤔
youtube · AI Governance · 2025-08-26T17:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugz3qTS819wIgZshvBl4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyWwj-vUslVBQdFn354AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugz4anTgjdsGbSssSZJ4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwaHX4mvwUBpBLGR8J4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzIpceCfSqOdxPm3mx4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
```
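The raw model response is a JSON array of per-comment codes, each carrying the same dimensions shown in the Coding Result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch response could be parsed and indexed for lookup by comment ID — the field names and IDs are taken from the response above, but the parsing code itself is an illustrative assumption, not this tool's implementation:

```python
import json

# A raw batch response from the model, as shown above: a JSON array where
# each element codes one comment across the four dimensions.
raw_response = """
[
  {"id": "ytc_Ugz3qTS819wIgZshvBl4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzIpceCfSqOdxPm3mx4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
"""

# Index the codes by comment ID so any single comment's coding
# can be looked up directly.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

code = codes_by_id["ytc_Ugz3qTS819wIgZshvBl4AaABAg"]
print(code["policy"])   # → regulate
print(code["emotion"])  # → fear
```

In practice the response would also be validated (e.g. checking that every requested comment ID appears exactly once and that each dimension's value is from the allowed code set) before being stored alongside the comment record.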