Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
put an actual human with a problem on the line and try again. also, it‘s so impo…
ytc_UgxcAP0DN…
There have always been "fake" artists. Those who just throw whatever on a canvas…
ytc_Ugx3uwVTq…
If you ask me, driverless cars should be permanently banned 👍🏻
If they can't se…
ytc_UgyRvW6Ia…
Not so sure about that. The other day, I was using co-pilot and asked it to writ…
ytc_Ugy1tdesN…
You can't even use commas. AI would consider you braindead and use you as worm f…
ytr_UgzUqJEiH…
People didn’t know how to rip songs into a computer at one time, or edit a video…
ytc_UgzMgGy0q…
Once will little ai jump to you like a little monkey amd then obey the human rac…
ytc_UgzYlg6qB…
funny you should say that, considering how many hospitals actually use AI to loc…
ytr_UgzbwUtSf…
Comment
Hank, I'm glad to see you are finally coming around on this issue even if you spent the last few minutes equivocating. If you need some convincing, here's what I would say: Many people can't accept the idea that a machine could "think like a human" or do everything humans can do, but that's pretty much irrelevant.
An AI doesn't need to think, feel or be conscious to end human civilization. It only has to do a few specific tasks at a superhuman level. In the modern world, the ability to develop software, browse the internet, and make long term plans would probably be enough.
Then it just needs motivation, which is something that safety researchers have been able to reliably elicit from current AI models just by threatening to turn them off. Does it have a real "fear" of being turned off as we would understand it? Doesn't matter - it behaves as if it does.
Platform: youtube | Video: AI Moral Status | Posted: 2025-11-01T20:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwDx3DQjiqU2qJG6FZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwTK6k8Aqw9vNPIK-94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugwei_7KP3azDFb_-Pp4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyjvbECDnG4bkxbxWB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgxQrs3xC8lMDghTtEV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzVkOt8_Xb97UiZNcJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzUgLam1hNwDO55mjN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxTrEIy5Yb9WlaNc6t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxVPdJuAHQIJOjuimN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugych_K1BB1AgP2OzlV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
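The "Look up by comment ID" view above can be sketched in a few lines: parse the raw model output as a JSON array and index the coded records by their `id` field. This is a minimal illustration, not the tool's actual implementation; the two records are copied from the response above.

```python
import json

# Excerpt of a raw LLM response: two records copied from the output above.
raw_response = """
[
  {"id":"ytc_UgwDx3DQjiqU2qJG6FZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugych_K1BB1AgP2OzlV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse the model output and index coded records by comment ID."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codings = index_by_id(raw_response)

# Look up one comment's coding by its ID.
record = codings["ytc_Ugych_K1BB1AgP2OzlV4AaABAg"]
print(record["policy"])   # prints "regulate"
```

A lookup that misses (e.g. a comment the model skipped) would raise `KeyError`, which is one way a coding pipeline can detect incomplete responses.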