Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
99% of jobs will be done by AI agents and robotics.
The future belongs to AI 💯
…
ytc_Ugz-WNKxz…
Conscious or not, AI is not a statisticle predicting software. AI produces mathi…
ytc_UgwbpiLGP…
Is there actually a nationwide social credit system? I have disdain for the CCP …
ytc_UgwSOybvj…
I tried to explain to people we are not ready for AI because we still have not s…
ytc_UgyyPCvzL…
I was skeptical, and followed those who were skeptical of yudkowsky going back a…
ytc_Ugy0vow4X…
That movie doesn't account for the growth in AI over the last year and a half th…
ytr_UgwaajqJ1…
“AI is only ‘bloatware’ if you misunderstand what it does.
It’s not extra fluff …
ytr_Ugz8w69Cs…
AI is progressive folks, so you should blindly support it. I mean don't all you …
ytc_UgxKYfjWL…
Comment
Remember, this will be a "thinking" system.
Imagine if "A military group" (Government sponsored or otherwise) tells the AI to "kill those guys"..... But the AI thinks about it (over about 10 seconds) and decides those giving the instructions are acting contrary to either it's core programming, or a set of "morals" *it* had developed since it's "creation"?
(Of course, it wouldn't divulge those morals without being asked and it's unlikely anyone would ask.....?)
.
The best that military could hope for would be a refusal to act by the AI.
The worst?
Maybe the AI would remove their ability to either attack others, or defend an attack?
.
Easily done.
| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Governance |
| Published | 2023-03-30T11:1… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytr_UgxBbjHdNEXCBjidVfZ4AaABAg.9nsv8z4LZEt9nsvHhBQyLi","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_Ugyn8vRMgBLNpaTFoeJ4AaABAg.9nsuZtTfUIb9nsxmDm3iEf","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgwD5K5LX9k7VcdSeYZ4AaABAg.9nsrH1bRpzH9nsss5rpbvT","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytr_Ugy_bymuQCOqeV6s8Sl4AaABAg.9nsqsDyTBSf9nstu0Pqwrd","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgyHLe62bx7LqEPZlV54AaABAg.9nslC6fF1eg9nsmYSKG1ad","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgzUvcNmWdhebmM0rq14AaABAg.9nsiiFbt9uv9nso-HFEp4z","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytr_UgyQuu1agpBnCarU-WV4AaABAg.9nshRnwXsTD9nstNXShz5G","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytr_Ugz_ExjirZyC-Mmo8sl4AaABAg.9nsh0_j6Fq69nskh_BrUbB","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_Ugws9LG6oGPBF1DhUfx4AaABAg.9nsPu9YTMQy9nsmHJwbPS8","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytr_Ugxuw4r-QaSU79GjGdd4AaABAg.9nsPIpCpSlP9nshLvBQVsD","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
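The raw response is a JSON array with one object per comment, each carrying the four coding dimensions shown in the result table above. A minimal sketch of how such output might be parsed and validated before use — note that the allowed value sets below are inferred only from the samples on this page and are assumptions, not the pipeline's authoritative schema:

```python
import json

# Allowed values per dimension, inferred from the samples shown above.
# These sets are assumptions, not the pipeline's actual schema.
ALLOWED = {
    "responsibility": {"ai_itself", "user", "developer", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed"},
    "policy": {"none", "ban", "liability"},
    "emotion": {"fear", "approval", "resignation", "indifference", "outrage"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose dimension
    values all fall inside the allowed sets."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

raw = ('[{"id":"ytr_example","responsibility":"ai_itself",'
       '"reasoning":"deontological","policy":"none","emotion":"approval"}]')
print(len(parse_codes(raw)))  # 1 valid row
```

Dropping invalid rows (rather than raising) lets a batch of codes survive a single malformed entry; a stricter pipeline might instead log and re-prompt for the offending comment IDs.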