Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click any to inspect):

- I have big ai concerns but I feel like testing chatbots to make good business de… (ytc_UgwUlSre3…)
- What a crock of fearmongering shite. So-called "AI" is still a hypothetical co… (ytc_UgwFbI3ud…)
- @WolfHeathen The problem is, not all accidents are repetitive. There are alway… (ytr_Ugw-Yx2NX…)
- The robot will not be injured, has no feelings, and owner also can help the robo… (ytc_UgwerFG5p…)
- I hate AI images as much as the next guy but oh my god twitter artists need to g… (ytc_UgyA70rYj…)
- I am Right now working in Sales and I can't really imagine how KI should replace… (ytc_UgxbdQsCf…)
- "The AI effect" is that line of thinking, the tendency to redefine AI to mean: "… (ytr_UgwaeuHXz…)
- The thing that most people seem to miss (though Geoffrey Hinton alludes to it) i… (ytc_Ugwjgv8G0…)
Comment

> It's disheartening to see how fear often becomes the default reaction. If AI truly possesses the intelligence we attribute to it, it would logically prioritize actions that ensure both our survival and its own, recognizing that our well-being is intricately linked to its own.

youtube · AI Governance · 2024-03-13T09:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyYEw4MCZC6EOoFVwh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwzVonEx2c8ZQHTcPR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyRk0z5FryI3q61Wo14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwslHYHdwu7qTWeywN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyqicRGXRXM4pmh8kh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugw0f7PAyu-743d2xO54AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugx4M6kjy8D8eyQKYhF4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_Ugyr7SZ22tNky3Vu4M94AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugyrv4rHeoYpcS2XOGl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwNiPEBQI9Txbt6mNx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"mixed"}
]
```
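A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a hypothetical helper (`parse_coding_response` is not part of any documented pipeline here): it loads the JSON array, checks that each row has a plausible comment ID (`ytc_` for comments, `ytr_` for replies, as seen in the samples), and validates each dimension against the value sets observed in this document. Those value sets are inferred from the ten rows shown and may be incomplete relative to the full codebook.

```python
import json

# Allowed values per coding dimension. These sets are inferred only from the
# responses shown above; the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"fear", "outrage", "approval", "mixed", "indifference", "resignation", "unclear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse one raw LLM response (a JSON array) and validate every row."""
    rows = json.loads(raw)
    if not isinstance(rows, list):
        raise ValueError("expected a JSON array of coded comments")
    for i, row in enumerate(rows):
        # IDs in this dataset start with ytc_ (comment) or ytr_ (reply).
        if not str(row.get("id", "")).startswith(("ytc_", "ytr_")):
            raise ValueError(f"row {i}: missing or malformed comment id")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"row {i}: bad value {row.get(dim)!r} for {dim}")
    return rows
```

Rejecting out-of-vocabulary values early makes it obvious when the model drifts from the codebook, rather than letting malformed codes silently enter the coded dataset.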