Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
At this point, it feels were just stuck in the AI race. None of these companies would dare slow down development of their models, for fear of being overtaken by the competition. I'm also pessimistic over the chances of some world agreement pact to slow or pause AI development. We could completely outlaw the further development of AI, and people will continue to do it.
Someone is going to lose control of one of these models someday. It doesn't feel like an 'if' question, but a 'when'. And when it does happen, I hope it works out for us lol
| Source | Video | Posted |
|---|---|---|
| youtube | AI Moral Status | 2025-12-12T20:2… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[{"id":"ytc_UgwoPeMsVfJVfD235KZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgzNgiTXKTnsd9KAIXl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
 {"id":"ytc_UgwILvn9vSF1VnlIrMl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgwUF9z1CW4NnDWTr5J4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"liability","emotion":"fear"},
 {"id":"ytc_UgzUI84MwRB5WxUznB94AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"mixed"},
 {"id":"ytc_Ugz6rAfqZWNYf9BjA7h4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgwT_4ubTRVoQOykPBx4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgxSY4WVINPbp-ZQjEF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgyC2u9XjF6TYZxJNk14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"resignation"},
 {"id":"ytc_Ugxr1DWydj_B4gaXQmJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]
```