Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Thats the problem AI is not evil but it will undertsand certain things wrong and…" (ytc_UgwiAkr_k…)
- "So this post of yours was Auto dubbed with what? An AI app? Couldn't you afford …" (ytc_UgwXveLQh…)
- "Most people are very chill or oblivious about this but the dangers are immense, …" (ytc_UgzvGXEYc…)
- "What’s crazy is I only saw one person working, the rest of yall were just chilli…" (ytc_UgxeLBpf5…)
- "Just wait till those older more experienced coders who understand ai flaws ask t…" (ytc_UgzSjdSCc…)
- "So as of today, we know for a fact that currently AI is willing to kill us all i…" (ytc_UgzarCjj6…)
- "I don't know how obvious it is to a lot of people. So much of the overall conver…" (rdc_n7todyq)
- "We have proof that you killed her. Facial recognition puts YOU right at the spot…" (rdc_exghfgo)
Comment
I’m sorry, but be is so wrong. All we must do, if artificial intelligence thinks that it will need to “turn off” the human “threat”, is do something that artificial intelligence simply cannot do. There is only so far that artificial intelligence can reach, and luckily we absolutely know that the human capability and human reach is quite literally infinite. And by the way, viruses, human biological “viruses” absolutely do not exist! No “virus” has EVER found, isolated, or photographed or sequenced. The “virus” is the BOOGYMAN.
youtube · AI Governance · 2025-11-18T01:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxqHzVlUvyo6ymWNLt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxVs4nMFTfeMTk3cCx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzY7MIXm9h2zkWwHap4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw0kI9PKQKHFrEZanF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugy6D83PoRolGZT80ZR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgyxPkQb6L7GFT8Dcct4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgypmTzgkSktdoBZY3d4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwJKhVSADMtJne9HmF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyU12hBDskqxOhkzvt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz6zlhkGYpb20NdT6B4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
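The raw response above is a flat JSON array with one object per coded comment, each carrying the five coding dimensions shown in the table. A minimal Python sketch of how such output could be parsed and indexed for lookup by comment ID (the `index_codings` helper and the two-record excerpt are illustrative, not part of the actual pipeline):

```python
import json

# Two records excerpted from the raw LLM response shown above.
raw_response = """
[
 {"id":"ytc_UgxqHzVlUvyo6ymWNLt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_UgyxPkQb6L7GFT8Dcct4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
"""

# The five dimensions every coded record should carry.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw: str) -> dict:
    """Parse the model's JSON array and index codings by comment ID,
    skipping any record missing one of the expected dimensions."""
    records = json.loads(raw)
    return {
        r["id"]: {k: v for k, v in r.items() if k != "id"}
        for r in records
        if EXPECTED_KEYS.issubset(r)
    }

codings = index_codings(raw_response)
print(codings["ytc_UgyxPkQb6L7GFT8Dcct4AaABAg"]["policy"])  # liability
```

Because the model's output is free-form text, a real pipeline would also want to guard the `json.loads` call against malformed or fenced output before indexing.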