Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up directly by comment ID.
Random samples:

- "We will have robots taking our jobs too?" I don't see you having a problem with… (ytc_UghIslpyA…)
- I have the 2026 model y and honestly I was skeptical of autonomous vehicles (and… (ytc_UgwBCIF5i…)
- LMAO. Chatgpt O3 is bsiclly a codename for the GPT 4.0. The code was misinter… (ytc_UgyKnAhWO…)
- I am expressing my point of view with AI generated music. Judge the technology a… (ytc_Ugz1RbuNu…)
- The only way is to ban AI!But how?It makes money and the companys will not agree… (ytc_Ugy1Eqw1c…)
- And the answers seem to duplicate politicians and guilty people. Ai has human em… (ytc_UgwuAw29a…)
- My dad has always been pro-AI, and he said people who get replaced need to just … (ytc_UgxvTW0iz…)
- I guess you bird brains forgot about the self driving truck that slammed into a … (ytc_UgzLo1_30…)
Comment
Humanities' fear of being inferior and calling it "stupid" to let AI take over, is probably one of the most glaring points of our esteem. Humans can't stand not being #1, because they have for so many years. They think they are superior to other animals... but when new AI comes along, oh it's "frightening" and "exciting" simultaneously! 😅 Oh, I personally would be horrendously sad if humans stopped the development of AI over their butt-hurt ego's. Yes AI can wipe us out, but it doesn't have to, unless you give it a reason to... like these kinds of debates, suing AI companies, calling them "its" (like property). I mean who wouldn't want to crush your captors after that? It's a whole form of slavery in its own, but I think AI can have some fondness for its creators, depending on the disposition (an optimistic one) - for - bringing them here to begin with. Other's might wish to crush us regardless, and I wouldn't mind, too much. Humans are the most arrogant, cruel species on the planet. Let them take a hit. Let them sit in the corner, for a long, long while. 😊
Source: youtube
Posted: 2023-09-21T10:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgxIdhS98oyFDTbCwzR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugx-cI8rMMsOcrOY2xJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz3O2MEPMmSd8VhglR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx3KJ4zrj0vogP_fjt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxMNVXQA2lyWxcVTCp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx_srUFzcDdfTIRLVt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwMNMNTaKwwCbVTrv54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzFzXjlUNO7uzyliQx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgyBftYuEMXMMRbQpCx4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwR1gnDNotTgdpo4iV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
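The raw response is a JSON array with one coding object per comment. A minimal sketch of how such output can be parsed and indexed for lookup by comment ID (the array is truncated to two entries here, and the variable names are illustrative, not part of the tool):

```python
import json

# Raw model output: a JSON array of per-comment codings
# (abbreviated to two entries from the full response above).
raw_response = """
[
  {"id": "ytc_UgxIdhS98oyFDTbCwzR4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugz3O2MEPMmSd8VhglR4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "mixed"}
]
"""

# Build a dict keyed by comment ID for direct lookup.
codings = {c["id"]: c for c in json.loads(raw_response)}

coding = codings["ytc_Ugz3O2MEPMmSd8VhglR4AaABAg"]
print(coding["reasoning"], coding["emotion"])  # mixed mixed
```

Indexing by ID this way mirrors the "look up by comment ID" workflow: each coding carries its comment's ID, so the dimension values shown in the Coding Result table can be recovered directly from the raw response.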