Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_UgyHGnugq… — "Last time I checked, Ai requires massive amounts of energy to run. So if it gets…"
- ytc_UgwKX7YIY… — ""efficiency in healthcare" is such a crazy take, "if we could get a doctor to do…"
- rdc_n0gl8ja — "LLMs turn really stupid people into slightly less stupid people and makes them f…"
- ytc_UgxdRw9bO… — "Christian Aussie here, Thx Jesus for USA👍❤️.UN👎🏼wants AI global government, matc…"
- ytc_UgwuzAUM6… — "There are certainly risks to AI in the sense that it can enable malicious actors…"
- ytc_UghqQAuaE… — "No, we cannot do AI once the robots realize we are inferior and using them for s…"
- ytc_UgwwidwOs… — "This self-driving cars SUCK I took one and dropet me off 1mile away from were i …"
- ytc_UgwJ5HrYV… — "As a Tesla owner and motorcycle rider in LA, I have noticed another hazard from …"
Comment (source: youtube, posted 2024-09-21T14:1…)

This statement looks like it was written by the cheapest AI and published without a human looking at it. Because let's be honest, all these technological tools are good but they do not replace humans...they don't have the more abstract and wide understanding of things. Like, for example Grammarly, would detect an error if I used certain abbreviations or more 'common' language preferring the more 'correct' writing. A human on the other hand can recognize that different characters in a story will have a different ways of speaking and that at times something being 'incorrect' is not an error but a choice.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
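A coded record like the one above can be sanity-checked before it is stored. The sketch below is a minimal validator; the allowed value sets are only those *observed* in this batch's raw response, not the project's full codebook, so treat them as an assumption:

```python
# Dimension values observed in this batch of raw responses.
# ASSUMPTION: the real codebook may define additional values.
OBSERVED_VALUES = {
    "responsibility": {"none", "company", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "virtue"},
    "policy": {"none", "liability"},
    "emotion": {"outrage", "resignation", "approval", "indifference", "fear"},
}

def validate_record(rec: dict) -> list:
    """Return a list of problems with one coded record (empty list = passes)."""
    problems = []
    if "id" not in rec:
        problems.append("missing id")
    for dim, allowed in OBSERVED_VALUES.items():
        value = rec.get(dim)
        if value not in allowed:
            problems.append("%s: unexpected value %r" % (dim, value))
    return problems

record = {"id": "ytc_UgzBoS3l4IeY5A4Koyd4AaABAg", "responsibility": "none",
          "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
print(validate_record(record))  # → []
```

Records with unexpected values (e.g. a hallucinated emotion label) come back with a non-empty problem list and can be flagged for manual review.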
Raw LLM Response
```json
[
{"id":"ytc_Ugxy0fjpdxulMoKNBVJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyHfD1ix_PyYZpviMV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwLRG8DsQ22poe7x8l4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzxBBa1MTcX1DrHv_R4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyLMAdVeni9xcDqtCl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgzBoS3l4IeY5A4Koyd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgznWjK7P_Wd3fw8baV4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxPt0l3jUffTtIgb1R4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy6tyqtmBA610xiNIJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyNsDYJlNQINUoJxdl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
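Looking up a coded comment by ID, as the page above allows, amounts to parsing the raw response and indexing it. A minimal sketch, assuming the response is a JSON array of records shaped like the one shown (the abbreviated two-record string here is illustrative, not the full batch):

```python
import json

# Illustrative raw response: two records in the same shape as the batch above.
RAW_RESPONSE = """[
  {"id": "ytc_UgzBoS3l4IeY5A4Koyd4AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyNsDYJlNQINUoJxdl4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]"""

def index_by_id(raw: str) -> dict:
    """Parse a raw LLM response (JSON array of coded comments) into a dict keyed by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

coded = index_by_id(RAW_RESPONSE)
print(coded["ytc_UgzBoS3l4IeY5A4Koyd4AaABAg"]["emotion"])  # → indifference
```

Keying on the `id` field makes the "Look up by comment ID" operation an O(1) dictionary access instead of a scan over every record.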