Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up by comment ID. Random samples:
- `rdc_k8t36yz`: "This! AI is really applied mathematics. Fuck the coders you need those math gee…"
- `ytc_UgwkfU5lF…`: "Many of today’s drivers are part of the problem forcing the hand of companies to…"
- `ytc_UgwT9iH5R…`: "You used AI. You are part of the problem. We should completely abolish it, or el…"
- `ytc_UgyZ7ATKl…`: "The AI are not what you think. They are what we call \"demons\" they found a way t…"
- `ytc_UgwjEodrv…`: "look, its a LARGE LANGUAGE MODEL. it knows words, grammar and context. Why would…"
- `ytc_UgwFb26J8…`: "Chatgpt already described to me that hateful contents are considered/\"felt\" as in…"
- `ytc_UgxTuWP6S…`: "Dude how is anyone amazed over this bs we knew this was gonna be like this. Ai w…"
- `ytc_Ugzv7SWtO…`: "\"Greed has no limits\" This line was for the companies who are developing ai and …"
Comment
Any veteran System Shock player out there knows it's a pretty bad idea to have some S.H.O.D.A.N. running amok. Pull the damned plugs already!
AFAIK, at least DARPA has tested autonomous turrets...the things were firing all over the place, allegedly even killed one technician in the process - as in "on purpose" since he was a threat to its continuation...plug was eventually pulled but just scale that crap up...
youtube · AI Governance · 2024-08-04T12:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugyskpvfexc6emo3rEx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzdwEKP59iAR3nLA394AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyzEXjbaJ5-XOiRA8d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxFicw_DtN5tTyvVd54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwFA8cs4N4WKTlPNQt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugxx8AweY_rEkjONkRl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxOi6E4Xt_MBhG6aox4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxmFBg_ZrFZbejgFdB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyU6hQJ_vS9F3kKHzp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgyjE6XPjOheWnvnh2R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
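The raw response is a JSON array with one record per comment, each coded along four dimensions. A minimal sketch of parsing and validating such a response before storing it, assuming the value sets inferred from the examples above (the real codebook may allow more categories), could look like this:

```python
import json

# Allowed values per coding dimension, inferred from the samples shown above;
# the actual codebook may include additional categories.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "company"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"none", "liability", "ban", "regulate", "industry_self"},
    "emotion": {"indifference", "approval", "fear", "outrage", "mixed"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response and index the codes by comment ID.

    Raises ValueError on malformed entries so bad model output is
    caught before it reaches storage.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        comment_id = rec.get("id")
        if not comment_id:
            raise ValueError(f"record missing 'id': {rec!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{comment_id}: bad {dim!r} value {rec.get(dim)!r}")
        coded[comment_id] = {dim: rec[dim] for dim in ALLOWED}
    return coded

raw = ('[{"id":"ytc_Ugxx8AweY_rEkjONkRl4AaABAg",'
       '"responsibility":"developer","reasoning":"deontological",'
       '"policy":"regulate","emotion":"outrage"}]')
codes = parse_codes(raw)
print(codes["ytc_Ugxx8AweY_rEkjONkRl4AaABAg"]["emotion"])  # outrage
```

Validating at parse time means a model that drifts off the codebook (a misspelled label, a missing dimension) fails loudly with the offending comment ID rather than silently polluting the coded dataset.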