Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
The great question really would be - “ aligned to whose set of values “ ? Misali…
ytc_UgyYbhXni…
[Spanish] THE REALITY IS THAT WE ARE IN A WORLD WHERE ROBOTICS WILL TAKE OVER HUMA…
ytc_Ugw7tVmws…
I'm still picking Mayweather. Would love to hear the robots post fight commentar…
ytc_UgxN75TKx…
AI told me this story was real after i input “sodium bromide” and apparently it’…
ytc_UgxgoMQro…
Dude. I feel bad for him, but in the most respectful way possible, how dumb do y…
ytc_UgxrS3_vC…
Dead ass I’m more worried about the girl robot she got way to much sense, she kn…
ytr_UgyAMZqlk…
Yep, reading text and counting your fingers. Funny how those are difficult aspec…
rdc_mth4ltn
What nonsense. AI isn't real. ChatGPT is just a giant reddit based bot that sp…
ytc_Ugytlm_7q…
Comment
there is no way to prevent those companies from developing that. What are you going to do ? impose government regulations ? the same government that wages war under the ideea that it is for the people of their own country but in reality it is for the interests of a few people in the leadership ? ^^ good luck. I could only hope that if an actual Ai, a real AI is developed, that it would be capable of reason and logic, maybe more than it's creators, so we could at least eliminate petty human values and emotions from the equation.
Source: youtube · Topic: AI Responsibility · 2025-07-24T14:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugz5YMRv48i9gwvlXsp4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx33hW5GVkP7m0ihHB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzwW-RB8v2IduzmaF14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzNAtR7ntolVwPAh514AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxnpMeQk1zLG2eQIUV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgypaDha6Ma3rYGqVAx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwc7MnaIYJMWCVJy2d4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugyq9wj1shnWh6OuPx94AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxbDO3WwpPmcMQzMOJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgyWwKhvz0rDqlOYmf94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
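A raw response like the one above should be validated before its codes are stored. The following is a minimal sketch, assuming this JSON shape (an array of objects with `id` plus the four coding dimensions); the allowed value sets are inferred from the samples shown here and are not the full codebook:

```python
import json

# Allowed values per coding dimension, inferred from the samples above.
# The real codebook may permit additional values.
DIMENSIONS = {
    "responsibility": {"government", "company", "ai_itself", "user", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability", "industry_self", "unclear"},
    "emotion": {"resignation", "fear", "indifference", "mixed", "outrage", "approval"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: codes},
    rejecting any value outside the expected codebook."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        codes = {dim: rec[dim] for dim in DIMENSIONS}
        for dim, value in codes.items():
            if value not in DIMENSIONS[dim]:
                raise ValueError(f"{rec['id']}: unexpected {dim} value {value!r}")
        coded[rec["id"]] = codes
    return coded
```

Keying the result by comment ID matches the "Look up by comment ID" view above: a coded comment can then be retrieved directly from the returned mapping.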