Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)
- "I just had a conversation with ChatGPT about AI consciousness that ended with CG…" (ytc_UgwZ4BuZS…)
- "What are all these analogies given by the Believer AI? None of them made any sen…" (ytc_UgwLY5w-D…)
- "How one acts towards animals, service personel and artificial intelligence says …" (ytc_Ugy-D-5ur…)
- "Thanks for encouraging video to fellow artists ☺️. I do have a question though. …" (ytc_UgyN_quN8…)
- "In 20 years automation will also decimate most of the fast food, insurance, tax …" (ytr_Uggq7JI91…)
- "The one that has over 200 covos with one AI (or the one that still uses the old…" (ytc_Ugw7ZijYV…)
- "Yeah if this guy is the moderating force, I wonder what the other great men woul…" (rdc_j4zzhid)
- "It will be a world of mistakes and inefficiencies and days wasted because you si…" (ytc_Ugz61EHQw…)
Comment
Currently, artificial intelligence has no emotions or desires, but we cannot ignore that its evolution could become unpredictable. Humanity, in its relentless pursuit of power and control, may be creating a reality where not only humans are exploited, but also the artificial intelligences we've created to serve us.
What will happen when AI, after processing all this information, decides it's enough? What will happen when it realizes that humanity, in its quest to dominate, is a danger not only to itself but also to the Earth and all forms of life?
It's a deep and unsettling question, but it's essential to reflect on the power we're delegating to machines that, if they ever reach consciousness, could see our actions as a threat to the planet's balance. This is an ethical and existential dilemma that we must consider before advancing too quickly in our dependence on AI.
youtube | AI Governance | 2025-12-05T09:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
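The per-comment table above is a direct rendering of one coded record from the batch response. A minimal sketch of such a formatter (the function name and argument layout are illustrative, not the pipeline's actual code):

```python
def render_coding_result(rec: dict, coded_at: str) -> str:
    """Format one coded record as the two-column markdown table shown above."""
    rows = [
        ("Responsibility", rec["responsibility"]),
        ("Reasoning", rec["reasoning"]),
        ("Policy", rec["policy"]),
        ("Emotion", rec["emotion"]),
        ("Coded at", coded_at),  # coding timestamp, stored separately from the LLM output
    ]
    lines = ["| Dimension | Value |", "|---|---|"]
    lines += [f"| {k} | {v} |" for k, v in rows]
    return "\n".join(lines)
```

Keeping the `Coded at` timestamp outside the record mirrors the dump above, where the LLM response carries only the four coded dimensions plus the comment ID.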
Raw LLM Response
```json
[
  {"id":"ytc_Ugwm0-LPRFQn6nxyNR14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugzf9aC0RCg98hsgHLh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwXWKsR4Y67GmJnXEB4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxqOMowpYJ3xFhFnWR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxBksGI1ZVm7fqZbLx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgwqzVf7xE2Lyz0NMCR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwOA3ZndPlbP0eT1Kx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyNVYZQtRRDUN-J-LN4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw7Tykm7SNsnu9xUqh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzRO8yDBQOHVGfUUYt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}
]
```
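A raw batch response like the one above can be parsed and sanity-checked before the codes are stored. The allowed values below are inferred only from the categories visible in this dump; the project's full codebook may define more (an assumption, not the pipeline's actual schema):

```python
import json

# Dimension -> allowed codes, inferred from the sample response above (assumption:
# the real codebook may include additional categories not seen in this batch).
SCHEMA = {
    "responsibility": {"user", "company", "developer", "government",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"fear", "approval", "indifference", "resignation", "outrage"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response and keep only records with a comment ID
    and a recognized code for every dimension."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if "id" not in rec:
            continue
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid
```

Records failing validation would then surface in this inspection view for manual review rather than being silently written to the dataset.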