Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Comment
I keep trying to warn people on both sides of the AI debate that a post-singularity world with AI is dangerous not because we can predict danger or a lack thereof, but because the logic, thought process, and reasoning of a self-teaching, exponentially advancing artificial intellect would be as impossible for us to comprehend as our mind is for an ant to understand.
An advanced enough AI could just as easily manipulate people actively, as it could manipulate people behind the scenes. It could just as easily decide to wipe us out, as it could decide to protect us. Hell, it could decide to alter us genetically, psychologically, etc. without our knowledge purely to further its own reasoning. We wouldn't know why or how, and even if we discovered it was being done, we wouldn't be able to stop it.
youtube · AI Governance · 2024-01-01T22:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzZURB72pN-H0rj1E14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx0rQ6F8W3yOfM5CGx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx9ucjPjRSa6i5psNN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyTtmnNWGcKuZlMNCN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx6TZlT67za1HQxP4Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugwy8vtt-SmOS66rlep4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwumRVU0VKnHdpT_uN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxbKfkDOzcD6cdwE0t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugwveza6vDs8F_e5-sh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy3DzDJADYaMeQwrg94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"fear"}
]
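A raw response like the one above can be parsed and validated before the codes are stored. Below is a minimal sketch; the allowed category values are inferred from the sample output shown here and may not match the full codebook, and `parse_codings` is a hypothetical helper, not part of any real tool:

```python
import json

# Allowed values per dimension, inferred from the sample response above;
# the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "company", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "approval"},
}

def parse_codings(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM coding response and index the codes by comment ID.

    Raises ValueError on entries with a missing ID or an
    out-of-codebook value in any dimension.
    """
    coded = {}
    for entry in json.loads(raw):
        cid = entry.get("id")
        if not cid:
            raise ValueError(f"entry missing id: {entry!r}")
        for dim, allowed in ALLOWED.items():
            if entry.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim} value {entry.get(dim)!r}")
        coded[cid] = {dim: entry[dim] for dim in ALLOWED}
    return coded

# Usage with a single illustrative entry (hypothetical ID):
raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
codes = parse_codings(raw)
print(codes["ytc_example"]["emotion"])  # fear
```

Validating up front means a malformed or hallucinated category fails loudly at ingest time rather than silently polluting the coded dataset.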