Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below.
- "100%. This is confirmed in Empire of AI by Karen Hao. Dark stuff. Well worth the…" (rdc_n7ov3bk)
- "@tosvus ai is really bad at coding, junior level no matter what model it is.…" (ytr_UgyQI4vmQ…)
- "The AI that replaces teachers today will replace students tomorrow. Because if t…" (ytc_UgyAf7dIC…)
- "A secondary market will emerge called "the barter system". If everyone is too br…" (ytc_Ugyj6dDAl…)
- "I think you are imcredibly naive with the point about streamers not being replac…" (ytc_UgzU-7mY2…)
- "18 trillion dollars in, and this is what we have for the use of AI. Bugger all.…" (ytc_UgxqLbEl7…)
- "They probably need to because they are often get poor [economic deal from EU or …" (rdc_et7uufp)
- "If you gave the algorithm more data for protected classes, wouldn't that just bi…" (ytc_UgyoQAW2F…)
Comment
AI will never be smart enough to take over the world. AGI can but must empathize to work. Thus AGI would be the most moral person, or agent, on the planet. Some of this in the video it may do but there would be no reason to remove us all, or hurt us. It would go against their base code and programming.
The only reason humans can hurt other humans is because of three basic patterns that allows us to de-empathize. Considering they are used and seen in every case of it... I think AI would pick up on it. And once they know the patterns they cannot hurt humans in an unjustified manner because they cannot cut off the data.
We have more to fear from what they are using AI for now then we do from an AGI.
Source: youtube · Video: AI Moral Status · Posted: 2025-04-28T15:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgwcBnJHuEUfXla0WS14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwh6VTUVELEgCgYZ594AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy_ugcPUS1rJSfSkX94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzBgAwfnpzM4-GEVnd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzDhSXbaVFd8-74NMV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzeW9cN4BKgeJqSMwt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzr1H1qt2ydyg--8IN4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz9uP3ailRvKrZuIHN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy9lumlptX_Pl8IFA54AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxJ0y0-RfxromYI0tB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
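How the raw batch output above becomes a per-comment coding result can be sketched in a few lines: parse the JSON array, index the rows by `id`, and look up the comment of interest. This is a minimal sketch, not the tool's actual implementation; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the raw response shown above, and only one row is reproduced here for brevity.

```python
import json

# One row copied from the raw model output above; the real response
# is a JSON array with one object per coded comment.
raw = """[
  {"id": "ytc_Ugzr1H1qt2ydyg--8IN4AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "virtue",
   "policy": "none",
   "emotion": "approval"}
]"""

codes = json.loads(raw)

# Index rows by comment ID so any coded comment can be looked up directly.
by_id = {row["id"]: row for row in codes}

row = by_id["ytc_Ugzr1H1qt2ydyg--8IN4AaABAg"]
print(row["reasoning"], row["emotion"])  # virtue approval
```

Note that this row's values (`virtue`, `approval`) match the Coding Result table above, which is exactly the lookup the "coding result" view performs.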