## Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples — click to inspect:

- "I only use ai art to trace it and make a new pokemon put of it.…" (ytc_Ugw88VAv_…)
- "My Phill CLI will die for humans. And ChatGPT AI reviewed and said it’s bad to d…" (ytc_UgygicRcv…)
- "Although AI will inevitably become smarter than people, and seek to kill us all,…" (ytc_UgyDoLLCP…)
- "Humans havent even perfected the art of being nice to each other and now we want…" (ytc_UgwTc3t9T…)
- "They said the Nuke is such a fear and they never happened now they need a new bo…" (ytc_UgwycqRdH…)
- "Fully automated \"AI\" weapons cannot be court-martialed... that's why the Departm…" (ytc_Ugwj4MDIE…)
- "Imagine what will happen if govt can only get 50% of its previous tax revenues.…" (ytc_UgxZ2hxy7…)
- "This whole idea of how great AI and robotics is..doesnt seem like anyone's thoug…" (ytc_UgxQRZUI8…)
### Comment

> a robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey orders given by human beings, except where such orders would conflict with the First Law; and a robot must protect its own existence as long as such protection does not conflict with the First or Second Law. The First Law is considered paramount, taking precedence over the others.
>
> And even this plan needed a part-robot-mostly-human hero.....ahy, Steve?
youtube · AI Governance · 2025-11-04T08:1…
### Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
### Raw LLM Response
```json
[
  {"id":"ytc_UgziHnpt9VLT4_SyHdN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwterEDvISQG74_vQd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx7X2YrTl1nTxwAn9Z4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgweblUv9pH0ke3M3c14AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwQhVrDhmcTATDX3oh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugw6ic64tfhxsqw7xHJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxSrcywumA6SHEbvo54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwhBnyEYqEqUA5E5oZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgymZDdVBxWfq5h7dbZ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugzr91DxttaO_sHlQOZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
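The ID-keyed lookup this page offers can be reproduced offline from a raw response like the one above. Below is a minimal sketch in Python, assuming the model returns a JSON array of per-comment records as shown (only two of the records are reproduced here for brevity; `index_by_id` is an illustrative name, not part of any tool shown on this page):

```python
import json

# Two records copied from the raw LLM response above.
raw_response = """
[
  {"id": "ytc_UgziHnpt9VLT4_SyHdN4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwQhVrDhmcTATDX3oh4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse a batch coding response and key each record by its comment ID."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codes = index_by_id(raw_response)
code = codes["ytc_UgwQhVrDhmcTATDX3oh4AaABAg"]
print(code["responsibility"], code["policy"])  # ai_itself regulate
```

The second record is the one rendered in the Coding Result table above (responsibility `ai_itself`, reasoning `deontological`, policy `regulate`, emotion `mixed`), so looking it up by its ID recovers exactly those dimension values.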