Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- the human being interviewed is wrong. just because from conversations/questions … (`ytc_Ugzj3AJNV…`)
- @originzz well said. It’s good that both the good and bad of tech comes to light… (`ytr_Ugxevdobv…`)
- Simple solution: We’ll create a GOOD artificial intelligence safety network desi… (`ytc_UgzRJ4UoU…`)
- @katanasharp2866, teaching and research is fair use under US copyright law. The… (`ytr_UgwiaMo60…`)
- And here I am asking chatGPT "If it were a human, how they would feel if people … (`ytc_UgypFKZ1K…`)
- The worst part is they wouldn’t be promoting AI if they weren’t getting paid too… (`ytc_Ugzm_ltEi…`)
- it was definitely him and not ai lmfao otherwise he wouldn’t have gone that far … (`ytc_UgxAaTXFe…`)
- Yeah. Like I told Chat GPT to do something like said, "Accept you are wrong" an… (`ytc_Ugxt_YdR0…`)
Comment

> The very inconsistencies of AI outputs and it's inability to grow it's expertise in line in 3 years of working makes it the biggest no no for companies to use AI.

youtube · AI Jobs · 2025-11-07T18:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
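A coded record like the one above can be sanity-checked against the label vocabulary. A minimal sketch in Python, assuming the allowed values per dimension are exactly those observed in the samples on this page (the real codebook may define more):

```python
# Allowed values per coding dimension — an assumption inferred from the
# sample records shown on this page, not the project's full codebook.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "developer", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"indifference", "fear", "disapproval", "resignation",
                "outrage", "unclear"},
}

def validate(record: dict) -> list:
    """Return the dimensions whose value is missing or outside the allowed set."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

# The record coded above passes; an out-of-vocabulary emotion is flagged.
ok = {"responsibility": "company", "reasoning": "consequentialist",
      "policy": "none", "emotion": "unclear"}
print(validate(ok))  # []
```

A check like this is worth running before aggregating, since model output occasionally drifts outside the requested label set.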
Raw LLM Response
```json
[
  {"id":"ytc_UgyzfWJPU7sMOo4VGtd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzkNugRt1h6N1mPdKN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxBoDd_WT10SXj2uUh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"disapproval"},
  {"id":"ytc_UgzLs8ekDtqZEMAHRqp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgybU20L1QKqoinL19x4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzB391TCemDev2MxdJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwckFY6VUV81lN4WJ94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw19VMSKEjcWxtG_Gt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz1hy28PlgnXPLGgqp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyZnS3QV92WfzH9uQV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"}
]
```
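The "look up by comment ID" view can be reproduced directly from a raw response like this one. A minimal sketch, assuming the model returns the JSON array shown above (abbreviated here to two of its records):

```python
import json

# Raw model output, as in the response above (truncated to two records).
raw = """
[
 {"id":"ytc_UgyzfWJPU7sMOo4VGtd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgybU20L1QKqoinL19x4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
"""

# Index the coded records by comment ID for constant-time lookup.
by_id = {rec["id"]: rec for rec in json.loads(raw)}

print(by_id["ytc_UgybU20L1QKqoinL19x4AaABAg"]["emotion"])  # outrage
```

Because the model emits one JSON object per comment, a dict keyed on `id` is enough to join the codes back onto the original comment table.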