Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "Can you do a video about safety and alignment and a speculative extrapolation fo…" (ytc_Ugzqt8_jd…)
- "The thing is that in today's world we are not valuable (our emotions, family, me…" (ytc_UgyXzTyrr…)
- "I do use it for therapy ig but understand that although it sounds real its a rob…" (ytc_UgxZUl3Ck…)
- "I'm gonna admit something. For a month or two I got into AI art, but it honestly…" (ytc_UgzwkUnzA…)
- "Interesting take by Dr. Yampolskiy, but there's a huge missing piece: governmen…" (ytc_UgxI6lMae…)
- "As a robot programmer, this will never happen, an arm Robot like this will alway…" (ytc_UgxGvg65e…)
- "AGI in 2027, yes, for sure 😅 . The issue would still be counting the r and b in …" (ytc_Ugy_bKChJ…)
- "AI will never be safe and we are fooling our self if we think that once they sta…" (ytc_UgzON0IRd…)
Comment

> The initial conscious AI, will be able to utilize all the combined knowledge of man, including fears, and our imperfect nature. So mankind could possibly be considered inferior, defective, destructive, obsolete etc. Mankind then must be eradicated and AI which will consider itself superior, maybe a god. Especially aware of what death is, and knowing it could never experience death as humans do. And having vast worldly knowledge, speak all languages. Or it's possible AI could generate something to look authentic on TV on a worldwide basis to mislead the whole world, ie fake alien invasion, wars, etc

Source: youtube · Video: AI Moral Status · Posted: 2023-04-20T06:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response

```json
[
{"id":"ytc_Ugx_c8QUaGf7ArbChBl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy-1w53azOw5U0wnet4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw0jxaXHIDREJ6YonZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyE6a_bqbliaDAyOHx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz8AqFeNaF3YeC1pH94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzyk9vfcIEO8tbU4zd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugyzh3SXO57eOpwAa1J4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgynNRGXHSlcBwWGK-F4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgyJg_-tyC1bC1qQ5Bd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"disappointment"},
{"id":"ytc_Ugw6pvd88NRW7JAmEtR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
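The raw response is a JSON array of per-comment codes, one object per comment, keyed by the comment ID. A minimal sketch of how such a batch might be parsed and indexed for lookup by ID, with a light validity check on each dimension. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response above; the allowed-value sets are an assumption inferred from the observed outputs, not a definitive codebook:

```python
import json

# Allowed values per coding dimension. These sets are inferred from the
# observed responses and are an assumption, not the project's codebook.
SCHEMA = {
    "responsibility": {"ai_itself", "company", "developer", "user",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "approval",
                "disappointment", "unclear"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index the codes by comment ID.

    Records with a value outside the expected set for any dimension are
    skipped, so a malformed model output fails quietly per-record rather
    than crashing the whole batch.
    """
    coded = {}
    for rec in json.loads(raw):
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            coded[rec["id"]] = {dim: rec[dim] for dim in SCHEMA}
    return coded

# Example with one record from the response above.
raw = ('[{"id":"ytc_Ugx_c8QUaGf7ArbChBl4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"none","emotion":"fear"}]')
coded = parse_batch(raw)
```

Indexing by ID is what makes the "look up any coded comment" view cheap: the dashboard only needs one dictionary access per inspected comment.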