Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytr_Ugw4I9TbF… — "There is a field called computational biology, I think you will love it if you r…"
- ytc_Ugyw31qVO… — "Yes. AI will eventually be able to replace almost everyone. Intellectually it …"
- ytc_Ugw6zNiZx… — "It's mostly offshoring and international labor with the rise of remote work. AI …"
- ytc_UgxQzKP3Q… — "Answer is that millionaires who have their own peculiar capital autonomous manuf…"
- ytc_Ugy3uR3YI… — "I call youraislopboresme "Royal AI" because I don't know the -person- AI that is…"
- ytc_UgxLSP1Qj… — "The moment AI is involved in a "piece" its called a prompt. We gotta stop callin…"
- ytc_Ugw03MA45… — "As someone who is chronically ill ((physically and mentally)), I've been very co…"
- ytc_UgxLg69yQ… — "We should not blame AI for making misakes, thats the only thing that makes them …"
Comment

> Scary thought - - What if someone builds humanoid robots with substantial physical capabilities? Such robots could fall under the control of a super AI center that's hostile to humans? We could have our hands full trying to shut down rogue AI centers of the sort Musk mentions.

| Field | Value |
|---|---|
| Source | youtube |
| Topic | AI Governance |
| Posted | 2023-04-19T11:1… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzpESE2NiPcWRtLkHx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgywWb9b3XvXm7Sf0iB4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxxmjpGb3AezvaVNzl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxJcsY5ewsABmgh6wV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugws5fNn_M_0i8wMrop4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw_87XmoXnYAy9ZR6J4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz21tMrGKtY57o93tR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxYBGKjAr6akRbtJ1h4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzQ1OTL1d0Lp-T3dmN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxvyRIPWSk1qxHXX9R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
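The raw response above is a JSON array of per-comment coding records. A minimal sketch of how such a response could be parsed and validated is shown below; the `CODEBOOK` of allowed values is inferred only from the labels visible on this page (the real codebook may include more categories), and the sample ID is hypothetical.

```python
import json

# Allowed values per coding dimension, inferred from the samples above
# (assumption — the actual codebook may define additional categories).
CODEBOOK = {
    "responsibility": {"none", "government", "user", "developer", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"indifference", "fear", "approval"},
}

def parse_llm_response(raw):
    """Parse a raw LLM coding response and index valid records by comment ID."""
    coded = {}
    for rec in json.loads(raw):
        comment_id = rec["id"]
        # Reject any record whose value falls outside the codebook.
        for dim, allowed in CODEBOOK.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{comment_id}: bad {dim!r} value {rec.get(dim)!r}")
        coded[comment_id] = {dim: rec[dim] for dim in CODEBOOK}
    return coded

# Hypothetical single-record response for illustration.
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"fear"}]')
coded = parse_llm_response(raw)
print(coded["ytc_example"]["policy"])  # → liability
```

Indexing by comment ID mirrors the "Look up by comment ID" affordance above: once records are keyed this way, retrieving the coding result for any inspected comment is a single dictionary lookup.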