Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- ytc_UgxS-03Xi…: "Yes ! Screw humans its boring ! Go to ai art beautiful full, yes empty of meanin…"
- ytr_UgyUHb9M6…: "Thank you for your comment! It seems like you are referencing the popular scienc…"
- ytc_UgwzMLQ9G…: "Sorry i want a robot that's better than an emotional cop or soldier.. Not a rovi…"
- ytc_UgzF9Vmoy…: "I don't think face recognition tech gives a fuck about the person's racial statu…"
- ytr_UgwOk6tyR…: "Because with a tool - pencil, paint brush, stylus, ect - you are personally putt…"
- ytr_UgzX4_vim…: "@Potatotutrial My point is that some areas of medicine are less likely to be neg…"
- ytc_UgwYV-dqz…: "I like how we are using AI to replace the employees that make 65k a year rather …"
- rdc_fnxwcq1: "If only hyper-capitalism would get out of the way, we could realize this in the …"
Comment
I've had a thought and I am curious what people think of it. Once super intelligent AI exist, it will be smarter than us and will understand the problems of the world better than we do. Watching this video I was feeling pretty scared for the future because of the thought of all this unemployment but surely a super intelligent AI would recognize that most of the worlds population are just decent people trying to live their lives and it would recognize the inequality of the super rich and would be able to come up with solutions itself. I suddenly thought that the super rich as well as criminal organizations would have a lot more to fear from super intelligent AI then most of the regular decent people out there who are just trying to live their lives. What do you all think?
youtube · AI Governance · 2025-11-17T16:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugw7MUUT0JXEFAQT8gJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugxxmr4pFpBt2RKaQqB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzz7jC1LRYHsC0tJLV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxnEpOpEOaEkQWDDd54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw0NwBbht_qIudIB8B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzt1tsyBvKbcYWdEDB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzKFIh41Wvl1TA1d5B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgytrFDpxpFTlon6RPV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzSIv1oeeZG1U1ZRWh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxNt5xiBphu9KzXuIl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
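The raw response is a JSON array of per-comment codes, one object per comment ID. A minimal sketch of parsing and validating such a batch before ingesting it: the allowed category values below are inferred from the examples shown on this page, not from an authoritative codebook, and the full schema may include more categories.

```python
import json

# Allowed values per dimension, inferred from the coded examples above
# (assumption: the real codebook may define additional categories).
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "resignation"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # skip records missing a comment ID
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Hypothetical two-record batch: the second record uses an unknown
# category value and is dropped by validation.
raw = (
    '[{"id":"ytc_x","responsibility":"company","reasoning":"consequentialist",'
    '"policy":"regulate","emotion":"outrage"},'
    '{"id":"ytc_y","responsibility":"alien"}]'
)
coded = parse_batch(raw)
print(len(coded))  # → 1
```

Validating against a closed value set like this catches the most common failure mode of batch coding runs: the model inventing an off-codebook label for one record while the rest of the array is fine.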