Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

> Why do i feel like this is a paid advertisement by openai? These examples must be explicitly given by openai since they know that the LLM will generate pretty consistent results with less margin of error.
> The real test would be when you give this LLM a real problem to solve.

Source: youtube · Viral AI Reaction · 2025-08-08T02:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgxUW1DQMsIjHU3oI7x4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz1fhltiyTOb8Qa_BB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugykq2yg3Svn9bgiyi14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwImXw1-CU1FRaHBt94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx-QifQrRSU_lvlQ7l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgysYhWv-PihFaezsip4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxiCxEBI8Ma78Qyydl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyRpnyr4FuK0pudoUF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgyGQh4so_1WDMY29d54AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzWXaj9HXaOk9wT_e54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
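A downstream script might parse a raw batch response like the one above and check each coding against the codebook before storing it. The sketch below is a minimal illustration in Python; the allowed category values are inferred from the labels visible on this page and may not be the full codebook, and `parse_raw_response` is a hypothetical helper name.

```python
import json

# Allowed values per dimension, inferred from codings seen on this
# page -- the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"user", "ai_itself", "company", "developer", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "resignation", "mixed"},
}

def parse_raw_response(raw: str) -> dict:
    """Index a raw LLM batch response by comment ID.

    Out-of-codebook values are kept but flagged under "invalid",
    so the exact model output can still be inspected rather than
    being silently dropped.
    """
    coded = {}
    for row in json.loads(raw):
        comment_id = row.pop("id")
        bad = {dim: val for dim, val in row.items()
               if val not in ALLOWED.get(dim, set())}
        coded[comment_id] = {"codes": row, "invalid": bad}
    return coded

raw = '''[
  {"id": "ytc_UgxUW1DQMsIjHU3oI7x4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]'''
result = parse_raw_response(raw)
print(result["ytc_UgxUW1DQMsIjHU3oI7x4AaABAg"]["codes"]["policy"])  # ban
```

Keeping invalid rows flagged instead of discarding them matches the purpose of this view: the raw model output stays inspectable even when it drifts from the codebook.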