Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- @bigmike0111 Wild that you judge someone for one comment. Just use google few se… (ytr_UgwyMf37L…)
- I’m hoping other republicans besides me can acknowledge this Ai bill is a horrib… (ytc_UgzfVGDxE…)
- Automation is only a good thing under a socialist/communist society, this will j… (ytr_UgykofGm9…)
- whats sora ai / and if someone says "Oh you're too pure" I WILL FIND YOUR- jokes… (ytc_UgzF4SxUZ…)
- My god. No matter what a parent does somebody like you will turn it into child a… (rdc_mvjh0sy)
- I think jusr like "Dan" said at the end, it responded within the parameters set … (ytc_Ugxtvi_9J…)
- You made me realize that these LLM really aren't conscious. I would love to see … (ytc_UgyGXb5Ue…)
- Again, AI Bubble is the theme of this year for economists and businessmen. And i… (ytc_UgztAWAox…)
Comment
Cleo cool video, but I don't understand why everyone has this insistence on claiming AI is just a supporting tool. That's not optimism to me, just a misunderstand of computer scaling and society. Sure, right now GPT 4 is only as good as barely expert doctors. So of course we'll go to a human for treatment. But what happens when GPT 5 can beat every doctor on that list? What about when GPT 6 or 7 is has an error rate 10 times lower than the experts? Are we really going to pretend we're going to be ok with doctors just using that as an aide in their decision making? This, of course, happening while the LLMs evolve their interactive abilities and become capable of performing surgery, comforting patients, doing all paperwork in seconds, etc. I just don't understand the game everyone is playing in saying these AI will just sort of be a little tool in the (input industry expert) kit.
Source: youtube · AI Harm Incident · 2024-05-31T15:0… · ♥ 6
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyeRIUzWibLNMrEnQd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyx60GwAmPsAACY1-R4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxHlgO2QKiD6XKFUo94AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwFuP_2V-b8gy5017d4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwyTaQbGoahOm5Q5RV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyBlBOAfDnOr4_wim14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy9b7IQyWfxsPsVeWF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx6Nf-jAVJkaIWrX2h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyKpp4tSrs88CGZ8zR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx4rC1BorCZ5sAmW814AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```