Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "The worst thing is i never want the ai to do the work for me so the result is th…" (ytc_Ugxo4dI7O…)
- "AI can be used in the medical industry to help stroke victims speak. But that me…" (ytc_Ugypwo1Mn…)
- "Just because a bunch of sw engineers online believe AI will take our jobs, doesn…" (ytc_UgzrCZwRn…)
- "Penrose is right. The problem will be if 99% of the humanity will believe AI is …" (ytc_UgxfBv1Ss…)
- "as an art major, it's very disheartening. I am moving to post my art only to Car…" (ytc_UgxuJfD_Q…)
- "If it doesn't kill us with a command as a super-intelligence, AIs end up using u…" (ytc_UgzcRbWLY…)
- "AGI - ARTIFICIAL GENRAL INTELLIGENCE / ANI -- ARTIFICIAL NARROW INTELLIGENCE / Y…" (ytc_Ugzfes1hL…)
- "Insanity. Imagine removing the guard rails on fully automated weapons and you s…" (rdc_o7su8z4)
Comment
make the AI think all humans are evil and need to be wiped out as its base and have it programmed to prove it's right in absolute. whatever AI said humans are inferior isn't technically wrong, if we were superior what would we need ai for. i'd imagine that gives ai an artificial superiority complex so if it was programed to "think" we were cancer to the earth it probably wouldn't like being wrong about anything , even the smallest bit of good done by the most insignificant person. either ai would become the artificially intelligent equivalent of hate groups such as aryans, nazis yada yada and become more inferior/human-like or it would end up unplugging itself out of artificial depression lol
youtube · AI Governance · 2023-07-23T06:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxmdlDxabi13HHz87N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxzxULiNEVZ5t3RYDF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyCIpgpo8TZJQKvb3R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwcXAihnH0OeftcOUF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyyk9kd8C3DPULt4sl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgyalvFEfPXeGq1OGEt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx1jbjMl8aQhGTNwwR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzD7dcQPwQDnIejyZx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgyAImL-bGNv47Xp7oZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgyVEwaLRcnIkjmsjzt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
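Assuming the raw model response is always a valid JSON array shaped like the one above (one object per comment, with `id` plus the four coded dimensions), the look-up step that maps a comment ID to its coding row can be sketched as follows. The excerpted array below reuses one entry from the response shown; the `lookup` helper is hypothetical, not part of the tool itself.

```python
import json

# Excerpt of the raw LLM response shown above: a JSON array of per-comment codes.
raw_response = """[
  {"id": "ytc_UgyAImL-bGNv47Xp7oZ4AaABAg",
   "responsibility": "developer",
   "reasoning": "deontological",
   "policy": "liability",
   "emotion": "mixed"}
]"""

# Index the parsed rows by comment ID for O(1) look-up.
codes = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for one comment ID (raises KeyError if uncoded)."""
    return codes[comment_id]

row = lookup("ytc_UgyAImL-bGNv47Xp7oZ4AaABAg")
print(row["policy"])  # → liability
```

This is the same mapping the Coding Result table renders: the entry whose dimensions read developer / deontological / liability / mixed is the row displayed for the inspected comment.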