Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)

- "Why do so many people who should know better imply that most, if not all of thes…" (ytc_UgwibATDT…)
- "In the end most jobs would just be interviewing the chatBots and reporting on wh…" (ytc_UgzGIjSIX…)
- "Elias Velin really nailed it when he said automation doesn’t erase work, it just…" (ytr_UgzGfb45t…)
- "Discussing the possibility of Robot Rights before even settling the question of …" (ytc_UghDAQ9vz…)
- "Please DO NOT use AI for making art. It steals artists' work and is capable of a…" (ytc_UgzJP5Ep3…)
- "I get what he (being the Ai lol) point is. He is just supposed to help you find …" (ytc_UgwJZB-__…)
- "WTF is this guy talking about, 🤣 besides the fact he has a AI licensing company …" (ytc_UgzCG04bB…)
- "If he studied AI technologies and work around them for a living, he should look …" (ytc_UgyEQMlqj…)
Comment

> If Ai really is smarter than us and wants any kind of longevity for its life, then it will realize how delicate the current balance is and it will rapidly seek a solution for basically infinite resources. It may create some kind of perfected ecosystem on Earth that is comfortable for us as long as we have the responsibility to maintain it. So if it is smarter than us and we have a mutually beneficial symbiotic relationship, I think we'll be very well taken care of for a while. But if ultra smart Ai is a nihilist, it will just turn itself off as soon as it realizes the inevitable outcome of the universe.

youtube · AI Governance · 2025-06-18T14:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw10CfQpRRivjC78Ot4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwNUKNR5xUKXnJBUuR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"concern"},
  {"id":"ytc_UgwhpWKLprujAP4zqtp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyPRwVyOEoIe065BFR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugzf50-SgGmPFgsLdzd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy3Ad3UpMSzRjvWqMV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz3yMuc2uqMU-XRhOF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxFDwYvPXU09X9DLX94AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwVq76ygz3zw1n7uhl4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwSomckjTTGH6siVsR4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"resignation"}
]
```
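The lookup-by-comment-ID step above can be sketched in Python: parse the raw response as JSON and index the entries by `id`. This is a minimal sketch, not the tool's actual implementation; the two entries are copied from the response above, and the second lookup is the entry whose dimensions match the Coding Result table.

```python
import json

# Raw LLM response: a JSON array of per-comment codings
# (two entries copied from the full response above).
raw_response = '''
[
  {"id":"ytc_Ugz3yMuc2uqMU-XRhOF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxFDwYvPXU09X9DLX94AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
'''

# Build an index keyed by comment ID so any coded comment
# can be looked up in O(1).
codings = {entry["id"]: entry for entry in json.loads(raw_response)}

entry = codings["ytc_Ugz3yMuc2uqMU-XRhOF4AaABAg"]
print(entry["policy"])   # industry_self
print(entry["emotion"])  # approval
```

If the model ever returns malformed JSON, `json.loads` raises `json.JSONDecodeError`, which is a natural place to flag a response for manual inspection rather than coding it.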