Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up directly by comment ID or by browsing the random samples below.
- "But not everyone will want AI doing procedures on them. Only takes a EMP to whip…" (ytc_UgyTSbXeS…)
- "Ya it should not be testable for any organism, but plants can respond to things …" (ytr_UgzqA70FX…)
- "Big beatnick snaps at Hank's creative sci-fi thought, \"we always thought the rob…" (ytc_UgwQuIu_l…)
- "Moral grey area. I'd say it's better to use a real photo as a reference though, …" (ytr_UgxNfuDE7…)
- "This is gonna be part of the Ai against humans documentary when Ai take over…" (ytc_UgyPikg4J…)
- "No measurable P&L is a big, big thing. It's evidence the AI systems are at least…" (ytc_UgzSOs5Ln…)
- "Here is something. You have a function f(x,y) and data pairs of x,f ignoring y b…" (ytc_Ugx07qJMU…)
- "I don't think AI is that useful at this point, but the paranoia comes straight f…" (rdc_n0h0iqs)
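Looking up a coded comment by ID, as the page describes, amounts to indexing the coded records once and querying the index. A minimal sketch, assuming records are stored as a list of dicts; the field names (`id`, `text`, `coding`) are illustrative, not the tool's actual schema, and the first ID is a made-up placeholder:

```python
# Minimal sketch of the look-up-by-comment-ID feature.
# Record fields below are assumptions, not the real schema.
records = [
    {"id": "ytc_example123", "text": "placeholder comment",
     "coding": {"policy": "none"}},
    {"id": "rdc_n0h0iqs", "text": "I don't think AI is that useful at this point...",
     "coding": {"policy": "regulate"}},
]

# Build the index once, then each lookup is a single dict access.
by_id = {rec["id"]: rec for rec in records}

def lookup(comment_id):
    """Return the coded record for a comment ID, or None if it was never coded."""
    return by_id.get(comment_id)

rec = lookup("rdc_n0h0iqs")
print(rec["coding"]["policy"])  # -> regulate
```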
Comment
> What we will see, if we don't insist on tight controls over the technology, is very destructive uses, actually created and controlled by humans for purpose, but will escape their accountability or try to, by blaming the AI technology itself...
> The AI is capable of magnitudes of good, but just as capable of magnitudes of evil...
> It will all depend upon the human's in control's decisions, of what and how it is used...
> Look at AI technology as you do, Nuclear technology...
> Also capable of magnitudes of both good or evil, totally depending on who's in control of it... 🇺🇸
youtube · AI Governance · 2026-01-06T14:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
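A coding result like the table above can be checked against the codebook's label sets before it is stored. A sketch, assuming the four dimensions and only the values observed on this page; the real codebook may define additional labels:

```python
# Allowed label sets per dimension -- only the values seen on this page,
# which may be a subset of the real codebook.
ALLOWED = {
    "responsibility": {"user", "ai_itself", "distributed", "government", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"regulate", "ban", "none"},
    "emotion": {"fear", "outrage", "mixed", "approval", "indifference"},
}

def validate(coding):
    """Return (dimension, value) pairs that fall outside the allowed labels."""
    errors = []
    for dim, allowed in ALLOWED.items():
        value = coding.get(dim)
        if value not in allowed:
            errors.append((dim, value))
    return errors

sample = {"responsibility": "user", "reasoning": "consequentialist",
          "policy": "regulate", "emotion": "fear"}
print(validate(sample))  # -> []
```

Rejecting or flagging out-of-vocabulary labels at this stage keeps malformed model output from silently contaminating the coded dataset.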
Raw LLM Response
```json
[
  {"id":"ytc_UgyN4aIOKk1PppNzFAV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugy83cVy_VPairZKLqR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxoe_NDzUv6494AzDB4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw6uxzd6LKFCfeyAll4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzSOczvMHpRmOomjM54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxLJFsnNZHzmP49Ksp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw8hz2RVgwq2mrJUpR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxobRx_yj7Gcx-BxVF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxzF-F6jR1jxO4jmXN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxJ4MgtN0TPcX8RBbN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
```
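A raw batch response like the one above is a JSON array with one object per coded comment. A minimal parsing sketch, assuming this array shape; the fence-stripping step is a defensive assumption, since models sometimes wrap JSON output in markdown code fences (the IDs here are placeholders):

```python
import json

# A small stand-in for a raw batch response; IDs are placeholders.
RAW = '''[
 {"id":"ytc_aaa","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
 {"id":"ytc_bbb","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

def parse_batch(raw):
    """Parse a raw LLM batch response into a dict keyed by comment ID."""
    text = raw.strip()
    # Defensive: strip markdown code fences the model may have added.
    if text.startswith("```"):
        text = text.strip("`")
        text = text.split("\n", 1)[1] if "\n" in text else text
    rows = json.loads(text)
    return {row["id"]: row for row in rows}

coded = parse_batch(RAW)
print(coded["ytc_bbb"]["policy"])  # -> regulate
```

Keying the result by comment ID is what makes the page's look-up-by-ID view a single dictionary access per query.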