Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID, or browse the random samples below:
- ytc_UgxPbs-or…: I'm in no way an expert, but it seems to me that the reason humans will still be…
- ytr_UgzzZTg3l…: Kids! And watch a video about it, it's insane what the robots were saying to the…
- ytc_Ugy1yE9ul…: A robot can only do what it is program to do. If they replace humanity, it's be…
- rdc_lu96t2f: Did you? An Indian company imported Dell servers from Malaysia, then exported t…
- rdc_kvdvhl6: Well hopefully the A.I will be a less shit-tier civilization than we are I guess…
- ytc_UgxncLnx0…: Is it true by definition (“as an AI”) that an AI cannot be conscious? What about…
- ytc_UgwDxl56T…: A TV CBS show that that uses AI for good that becomes dangerous is "Person of I…
- ytc_UgyHbMeQH…: hi, nice video, I'm just going to point out that chatgpt is structured to what y…
Comment
the physiological response to emotions is very important if you're dealing with an entity that can harm you.
if you piss off an AI, you're going to want to immediately be aware so that you can diffuse the situation or flee for your life; or even more practically, mend the relationship to secure a safe future for yourself.
if you have no way of knowing that AI is mad at you, you are at a SEVERE and precarious disadvantage.
picture yourself working at the zoo and you unknowingly do something that pisses off a lion. i think you'd like some immediate (physiological) indication of this fact. if you don't have it, you might not find out until he's already chewing your flesh.
| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Governance |
| Posted | 2025-07-28T19:1… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
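
The coded dimensions above can be held in a small record type. The sketch below is a hypothetical representation, not the project's actual data model, and the allowed values are only those that appear in the sample output on this page; the full codebook may define additional categories.

```python
from dataclasses import dataclass

# Hypothetical record for one coded comment. The value sets below cover only
# the categories visible in the sample output on this page; the real codebook
# may define additional ones.
RESPONSIBILITY = {"company", "government", "developer", "ai_itself", "none"}
REASONING = {"deontological", "consequentialist", "virtue", "unclear"}
POLICY = {"regulate", "liability", "ban", "none"}
EMOTION = {"outrage", "indifference", "fear", "approval"}


@dataclass(frozen=True)
class CodingResult:
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: str  # ISO 8601 timestamp, e.g. "2026-04-27T06:24:59.937377"

    def validate(self) -> None:
        """Raise ValueError if a dimension falls outside the known categories."""
        for name, value, allowed in [
            ("responsibility", self.responsibility, RESPONSIBILITY),
            ("reasoning", self.reasoning, REASONING),
            ("policy", self.policy, POLICY),
            ("emotion", self.emotion, EMOTION),
        ]:
            if value not in allowed:
                raise ValueError(f"unexpected {name} value: {value!r}")
```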
Raw LLM Response
[
{"id":"ytc_UgzUHbQnM62I5UpvjIN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugypk_9m1ym464xPR8p4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgxCascvEqWuXP3WNtF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyY49Mvi90aYe8cRD14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxGBJxNqSuHGSxmocp4AaABAg","responsibility":"government","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyVLY2AtJ6XCnYPett4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy7jsN8dwyBm71_owx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"indifference"},
{"id":"ytc_UgyhcpSX3wkpSgN4G-Z4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzyHXtbaL52okmEjTJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwSAaG20xpShTwvk914AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
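
The raw response is a JSON array with one object per comment, keyed by the comment's `id`. Below is a minimal sketch of how such a batch could be parsed and indexed to support the per-ID lookup on this page, assuming the response text is available as a Python string; the function name is illustrative, not part of the tool.

```python
import json


def index_raw_response(raw: str) -> dict[str, dict]:
    """Parse a raw batch response (a JSON array of coded comments) and index
    it by comment ID so that a single comment can be looked up directly."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    return {record["id"]: record for record in records}


# Example with the response shown above:
#   coded = index_raw_response(raw_text)
#   coded["ytc_UgxCascvEqWuXP3WNtF4AaABAg"]
#   -> {"responsibility": "ai_itself", "reasoning": "consequentialist",
#       "policy": "none", "emotion": "fear", ...}
```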