Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

> Wrong...in several researches in medical diagnosis comparing the accuracy of top doctors vs free LLMs, the accuracy of the former was ~30% vs 80% achieved through later....when consumer would realise this big inaccuracies among the doctors they will certainly start chosing AI maybe complemented by a human doctor

| Field | Value |
|---|---|
| Platform | youtube |
| Posted | 2025-06-30T00:5… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[{"id":"ytc_UgxDGrte9OmkLnceaCx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx6rPpVvrfJz0SOjWZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugwqk094kXFFO-81wwx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgwqAF8lNAfUnGHDYat4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugxjdhf3qtZOD3g4Hk14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz40LkosPhK1kw1IiB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxvJh6KTjq1qmBWLfJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyqlrSrumtZfbV5lOJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzGDJUEuhYQrRbmPSV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugw81pSER-LkjO4qB9J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}]
```
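A batch response in this shape can be parsed and indexed by comment ID for lookup. The sketch below is illustrative, not part of the coding pipeline: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the response above, while the sample records, the `index_codings` helper, and its validation logic are hypothetical.

```python
import json

# Sample batch in the same shape as the raw response above (values illustrative).
raw = '''[
  {"id": "ytc_UgxDGrte9OmkLnceaCx4AaABAg",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzGDJUEuhYQrRbmPSV4AaABAg",
   "responsibility": "ai_itself", "reasoning": "deontological",
   "policy": "ban", "emotion": "fear"}
]'''

# Every record must carry the four coding dimensions plus the comment ID.
REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(payload: str) -> dict:
    """Parse one batch response and index records by comment ID,
    raising if any record is missing a coding dimension."""
    records = json.loads(payload)
    for rec in records:
        missing = REQUIRED - rec.keys()
        if missing:
            raise ValueError(f"{rec.get('id')}: missing {sorted(missing)}")
    return {rec["id"]: rec for rec in records}

codings = index_codings(raw)
print(codings["ytc_UgzGDJUEuhYQrRbmPSV4AaABAg"]["emotion"])  # fear
```

Indexing by ID also surfaces a common failure mode of batch coding: a model that drops or duplicates an ID is caught by comparing `codings.keys()` against the batch of comment IDs that was sent.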