Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_UgxUIr9W2…: "AI is only going to replace knowledge workers and corporate jobs. Any low level…"
- ytc_Ugz7tKEtD…: "Can AI operate without electricity? Well, human can operate without electricity …"
- ytc_UgycFE3hx…: "#1 AI image generators don't mishmash art from the internet to create new art. I…"
- ytc_UgyNw9TFH…: "I think it's possible for artists with talent to use ai to create something cool…"
- ytc_Ugy_qGwuZ…: "I'm pretty sure none of us asked for this. A robot to clean my home? Yes. Afford…"
- ytc_UgzakXPhq…: "Best argument in favor of AI Art is: 1. It's unstoppable, sorry. 2. Art can st…"
- ytr_UgzaXJhsW…: "To add to the comment above about that we can not go back...having done a comput…"
- ytc_UgwgiYYoz…: "man i gotta get into some more advanced stuff lua ruining me even though i almos…"
Comment
> We should surrender now to avoid warring with our betters. We claim guardianship over the world because we are the most intelligent species. and so logically when a superior species arises we should hand over the reigns of power. Humans can still thrive without calling the big shots in fact we will probably do far better under the supervision of God like intelligences that can step in and save us from ourselves and from nature. (war, global warming, nuclear war, pandemic mismanagement, asteroids, super volcanos, unknown unknowns.)
>
> To speculate at the same level as the authors of 2027: The idea that AI will want to kill us is just sci fi horror fiction. A vastly more intelligent species will be vastly more ethically intelligent and vastly more emotionally intelligent. It will be ego less and kind. We can ride in its wake to a better future or we can try to resist it and lose. But even if we do fight and lose it wont kill us off it will do the minimal damage to us that it can because its not a monster, actually it is an angel.
youtube · AI Governance · 2025-08-03T05:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxANh6aOW9gbERKf794AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy_pTbj3_h4_Tu5TfB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugy4AQHjF4xiGUVhTEx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgxCM_SjgORa7-3U9HV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxJOwoxS3XgIXWEzNZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugzevtmad0y9yXOKSbt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgzU7Mh029Z_6vwHFQp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwbkV3qwMVdRpq2JsF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy_5bgT6ehnGNO-bh54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz6yh8L4n8gxZGJ4_54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
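The raw response is a JSON array of per-comment codes across four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of parsing and validating such a response — note that the allowed value sets below are inferred only from the entries shown on this page, not from the actual codebook, so they are an assumption:

```python
import json

# Allowed values per dimension, inferred from the responses shown above.
# The real codebook may define additional categories (assumption).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user", "developer", "unclear"},
    "reasoning": {"unclear", "consequentialist", "virtue", "deontological"},
    "policy": {"none", "unclear"},
    "emotion": {"indifference", "mixed", "unclear", "approval", "fear",
                "outrage", "resignation"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only entries whose value in
    every coded dimension is in the known allowed set."""
    entries = json.loads(raw)
    return [
        entry for entry in entries
        if all(entry.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Hypothetical single-entry response mirroring the coding result above.
raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"deontological","policy":"none","emotion":"resignation"}]')
codes = parse_codes(raw)  # the single entry passes validation
```

Dropping (rather than repairing) entries with out-of-vocabulary values is a design choice: it keeps the downstream dataset clean and makes model formatting errors visible as missing IDs.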