Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_UgzlffSaL…: "If AI were to completely take over some Matrix like scenario seems likely to hap…"
- ytc_UgzdtgmtW…: "all opinion. they miss to support their claim with factual or recent studies reg…"
- rdc_lz7etxd: "Nothing will come of this and they will have no impact on the publishing industr…"
- ytc_Ugw9d9Alu…: "Sidney kind of freaked me out, and I am really into AI and very curious about it…"
- ytc_UgzXLt820…: "Anyone who lets new technology protect them without being ready to intervene in …"
- ytc_Ugw4bHAZK…: "AI makes simple mistakes, you still need someone to identify no right areas for …"
- ytc_UgwYJH6Ft…: "people keep saying that entry level jobs aren't being hired and its the fault of…"
- ytr_UgyTWVCYN…: "Not having empathy will do that to a person. If you question whether this is rea…"
Comment
I will say unless open ai provide sourcing and within the response of the prompt The New York Times, it still bolds down to a human needed to clarify and verify that the source of material is correctly used. If we focused on the tool doing good or bad we will be in a forever argument. The only way to cut down on all the jargon is to focus on the intent of the user by their prompts. We need to remember the software open ai, ChatGPT, is an ai TOOL SOFTWARE just like any other tools in history like a crowbar or a wagon the intended purpose can sometime be perceived where it can be used for harm but the problem isn't the fault of the tool but the intended purpose of why that tool is being used for what purpose at that time frame.
youtube
AI Responsibility
2026-04-11T17:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[{"id":"ytc_Ugx8Viy_sDAEZXeqlY54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_Ugy1v0jWFFT9xrvZAwN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},{"id":"ytc_UgynuSlUUdYC41lMp7p4AaABAg","responsibility":"user","reasoning":"mixed","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgxMwBN8mAd36vW6B754AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},{"id":"ytc_Ugwi16Ocdi6mc8ne5v94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},{"id":"ytc_UgwAn9nOiFa9eP8zwW54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},{"id":"ytc_UgzPOJuaxQ8g3a3qG4J4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"fear"},{"id":"ytc_Ugxgv0N-lsfnhOgtT0d4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},{"id":"ytc_UgxEkQpg_Pv8-OwdvaN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgzoDH-DfKgOkMol9np4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}]