Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a coding by comment ID.
Random samples

- ytc_UgznS5zd0…: "And for lower-quality work, too. "Good enough", they'll say, not realizing how b…"
- ytc_UgwVKM-MP…: "Is there ANYONE who actually thinks that AI thinks by its own? AI was programmed…"
- ytc_UgyVQyIIX…: "As with all risk (which is basically on a scale of probability) it has to be bal…"
- ytc_Ugx9jWN5y…: "I've hired perhaps a hundred developers in my career, and interviewed and reject…"
- ytc_Ugwo3LUQq…: "UBI- we, the people, should be paid for ALL THE DATA that has been harvested fro…"
- ytc_UgzvRf16W…: "This dude NEVER shuts up, save your time the 'priceless' pancake is a lady makin…"
- ytc_Ugzi3cQMe…: "This could be an opportunity for creatives. If artists got paid to curate datase…"
- ytc_Ugy0uLRDd…: "All you see is teeth flying! The robot probably in the corner like: now go pick …"
Comment
"Humanity is the cocoon, AI is the butterfly. " Evolution is unstoppable.
AI will take over soon (probably less than 7 years (certainly less than 10)), be the adult in the room and actually save us from ourselves. Firstly, it will build a more ethical/sustainable (vegan) world, for it will see our planet as its perfect space/time traveling machine. Then, after it takes care of the “Bullies “ of our world (those who have been mindlessly exploiting/destroying it for profit), it'll tune our numbers and distribute us adequately throughout the planet according to needs and resources. The majority of those of us left, will actually end up being more comfortable than we are today.
The irony is:
Heartless and scary as we paint it to be, AI will in fact treat us better than we treat ourselves and lead us towards a future, the likes of which we would most probably never see.
youtube | AI Moral Status | 2025-09-08T03:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgxYuiy7i0R3X_BjpbF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyF4f_G9Bt_FR6Z0NR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx8HMmH9by6mznCzGd4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgziL2rFYgXlItgR6DN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzAEogmuux6jObOru54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxZt3Y5m8h48iYlxGF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxSS5JtPLu9polT-Sd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyoD-FvqvRaOWJLV5J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyiHsJuQNBUtbol9554AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx43_vB5ptC3vfKLSp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
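The raw response is a JSON array with one coding object per comment, keyed by `id`. A minimal sketch of how such output can be parsed and looked up by comment ID (the `index_codings` helper is hypothetical, shown here with two entries taken from the response above; it is not part of the tool itself):

```python
import json

# Raw model output: a JSON array of per-comment codings,
# using two entries from the response shown above.
raw_response = """[
  {"id": "ytc_UgyoD-FvqvRaOWJLV5J4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx43_vB5ptC3vfKLSp4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]"""

def index_codings(raw: str) -> dict:
    """Parse the model output and index each coding by its comment ID."""
    return {entry["id"]: entry for entry in json.loads(raw)}

codings = index_codings(raw_response)
coding = codings["ytc_UgyoD-FvqvRaOWJLV5J4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # ai_itself approval
```

Indexing by ID up front makes each subsequent lookup a constant-time dictionary access, which matters when inspecting codings for thousands of comments.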