Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Self driving is more safer than human, with alcohol, emotional problems, low ski…" (`ytr_UgzD6MKMF…`)
- "Artificial intelligence is meant to serve humanity, not to lay them off from the…" (`ytc_Ugy1gKlwH…`)
- "Let’s not forget something else; the AI takeover of the military. As the Russo…" (`ytc_UgzvTyNOi…`)
- "i told google bard to make a saxophone ascii and it made a fucking stick…" (`ytc_Ugwdos8VE…`)
- "Funny thing, we started using ShortlistIQ and it’s really streamlined our hiring…" (`ytc_Ugx2vspVI…`)
- "ATTENTION!! STOP using AI its killing the polar bears, we don't want the net gen…" (`ytc_UgySrXQ1e…`)
- "And then ai peope looking at ai and ai look at peoplr ai and ai look at ai looki…" (`ytc_UgzyertO5…`)
- "Public use of ai should be a felony and any country that allows their citizens t…" (`ytc_UgyPCOEMa…`)
Comment
> AI is based on human intelligence and thus can never surpass human intelligence. Humans can kill, so can AI, etc...
>
> Human intelligence is actually a form of Artificial Intelligence. It is based on conditioning, repetition, and guided education in directions determined by society. Human intelligence can be seen as the "ego" or the Artificial "I", the "I", the "me" who thinks it is a somebody, but is nothing more than a programmed response system reacting to various stimuli.
>
> We have not figured out how to establish a caring society worldwide yet and we dare to pass on our destructive traits to machines and ironically believe that somehow we are 'great' to be able to develop AI. _Fools paradise_ !!!!
| Field | Value |
|---|---|
| Source | youtube |
| Topic | AI Governance |
| Posted | 2025-11-12T23:3… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwLQKWn4-anQuacoj94AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy8KFd2qX4LCsKjpEJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzQJYtkgk07u-YgJAt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugx2vUpPE3JoMuo3zh54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxkPmmS8QXl3V6aMit4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzjQo5prUntDeqN3Ll4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugx8qatdmkyJ4jSZzx14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzkoLbpcBtAuS0bVgR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxII9_egHR9aXGK_il4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy_f_aVx38tsNZhyhV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
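The raw response is a JSON array with one object of codes per comment, keyed by comment ID. Looking up a single comment's codes (as the inspector does) reduces to parsing the array and indexing it by `id`. The sketch below assumes that structure; the variable names are illustrative, not taken from the app's actual code, and the embedded sample uses two rows from the batch above.

```python
import json

# Assumed shape of a raw batch response: a JSON array of
# {"id", "responsibility", "reasoning", "policy", "emotion"} objects.
raw_response = """
[
 {"id": "ytc_UgwLQKWn4-anQuacoj94AaABAg", "responsibility": "none",
  "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
 {"id": "ytc_Ugy_f_aVx38tsNZhyhV4AaABAg", "responsibility": "ai_itself",
  "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]
"""

# Build an index so any coded comment can be fetched by its ID.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

# Look up one comment's codes by comment ID.
codes = codes_by_id["ytc_UgwLQKWn4-anQuacoj94AaABAg"]
print(codes["reasoning"], codes["emotion"])  # mixed indifference
```

Keeping the codes keyed by comment ID also makes it easy to join them back onto the original comment table for analysis.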